Health professionals’ perceptions, barriers and knowledge towards oral health care of dependent people in nursing homes: a systematic review

Introduction The global shift to an older population continues to be one of the most significant societal changes of the 21st century, with the global population aged 65 years and older projected to exceed 1.5 billion by 2050 . As the population continues to age, the burden of chronic non-communicable diseases such as heart disease, cancer and musculoskeletal disorders will continue to increase . Oral diseases are no exception, and because they are often neglected, they continue to be a significant burden . Tooth loss increases with age: according to 2017–2020 National Center for Health Statistics data, 13.2% of seniors have no natural teeth . Tooth loss can affect overall health and well-being, as edentulous older adults commonly experience compromised nutritional status, impaired speech function, and social discomfort, potentially leading to social isolation . Nursing home residents, in particular, exhibit high rates of preventable or treatable oral/dental problems, including dental caries, gingivitis, periodontal disease, and gingival or oral discomfort and pain . The need to improve oral health care in nursing homes becomes even more urgent when we consider that poor oral health is associated with an increased risk of malnutrition, aspiration pneumonia, respiratory disease, diabetes, and cardiovascular disease . Health care professionals, such as nurses and aides, serve as the primary health care providers in nursing homes. Not only do they spend a considerable amount of time with older adults, but they also have a significant impact on their health care . 
Although nurses recognize the importance of promoting oral health in frail older adults , the literature highlights the inadequacy of oral health education and training for health care professionals . Unfortunately, dental health in older adults is often overlooked and remains an understudied area of research despite its importance in maintaining well-being, overall health, and quality of life . While there are scientific papers on oral health in nursing homes and institutionalized older adults, there are no systematic reviews on caregivers’ perceptions. This systematic review aimed to evaluate caregivers’ perceptions of oral health care for dependent nursing home residents. The objectives were to summarize the methods used to assess barriers/difficulties, knowledge, training, available equipment, and perceptions of health care professionals regarding oral health care for dependent nursing home residents. Methods 2.1 Protocol and registration All authors drafted the protocol, registered it with the National Institute for Health Research PROSPERO ( http://www.crd.york.ac.uk/PROSPERO , ID number: CRD42024497782), and reported it according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) checklist . 2.2 Focused questions and eligibility criteria We developed a protocol to answer the following PICO question: “What are the perceptions of health care professionals regarding oral health care for dependent nursing home residents?” The respective statements were as follows: P (Participants): Health care professionals caring for dependent older adults in long-term care facilities. I (Intervention): No intervention was applied, as the focus was on health care professionals’ perceptions and practices. C (Control): The presence or absence of a control group was not a limitation. 
O (Outcome): The outcome was the assessment of the perceptions, barriers, difficulties, knowledge, training, and available equipment for performing oral health care, as reported by health care professionals. Cross-sectional observational studies were eligible for inclusion if they addressed the perceptions, difficulties, activities performed, and knowledge of health care professionals providing oral health care to dependent adults in long-term care facilities. Exclusion criteria were as follows: 1. duplicate studies; 2. abstracts, commentaries, reviews, letters to the editor, consensus statements, opinions, case studies, and case series; 3. unpublished information; 4. absence of the data being studied; 5. data obtained through a non-structured interview with non-comparable results; 6. population consisting of family members acting as informal caregivers; and 7. articles written in languages other than English, Spanish, Portuguese, or French. There were no restrictions on the year of publication. 2.3 Data search strategy and study selection We searched PubMed/MEDLINE, Web of Science, and LILACS for all relevant articles published until December 2023. The following search terms were used: (1) (care home OR nursing home OR residential OR caregiver* OR care facilities); (2) (elder* OR senior* OR old OR aged OR geriatric); (3) (oral health OR oral care OR oral knowledge OR health care). Two independent reviewers (J.P.L. and I.R.) performed the search and included studies. The same two reviewers assessed the titles and/or abstracts of the retrieved studies in duplicate, and disagreements were resolved by discussion with a third author (J.C.). For measurement reproducibility, inter-examiner reliability following full-text assessment was calculated using the kappa statistic. 
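The kappa statistic mentioned above can be computed directly from the two reviewers’ paired include/exclude decisions. A minimal sketch in Python (the decision lists below are illustrative, not the review’s actual screening data):

```python
def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater1) == len(rater2) and rater1, "need paired, non-empty ratings"
    n = len(rater1)
    # Observed proportion of decisions on which the two raters agree.
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Agreement expected by chance, from each rater's marginal proportions.
    labels = set(rater1) | set(rater2)
    expected = sum((rater1.count(l) / n) * (rater2.count(l) / n) for l in labels)
    return (observed - expected) / (1 - expected)

# Illustrative full-text screening decisions for two reviewers:
r1 = ["include", "include", "exclude", "exclude"]
r2 = ["include", "exclude", "exclude", "exclude"]
kappa = cohens_kappa(r1, r2)  # 0.5 for these toy data
```

The review’s reported kappa of 0.614 falls in the 0.61–0.80 band conventionally labeled “substantial” agreement on the Landis–Koch scale, consistent with how it is described in the Results.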
2.4 Risk of bias assessment The methodological quality of the eligible studies was assessed using the Newcastle-Ottawa Scale (NOS) , which was adapted for cross-sectional studies . This adapted version of NOS evaluates three major domains for potential sources of bias: (1) selection bias (methods of participant selection), (2) comparability bias (methods of controlling for confounding variables), and (3) outcome bias (methods of assessing outcomes). Each of the seven items on the scale is assigned a star, with a maximum of one star per item. In this review, both selection bias and outcome bias were of particular concern due to the reliance on self-reported data, which can introduce a range of biases, such as recall bias or social desirability bias. Therefore, we assessed whether studies adequately controlled for such biases by using validated tools, objective measures, or triangulation of data sources where possible. The risk of bias assessment was conducted by two researchers (J.P.L. and I.R.), with any disagreements resolved by consulting a third researcher (J.C.). If a study was deemed to have a high risk of bias in any domain, we noted this in the quality assessment summary and took it into account when interpreting the findings. 2.5 Data extraction process and data items Data extraction was performed independently by two reviewers (J.P.L. and I.R.), with discrepancies resolved through discussion with a third reviewer (J.C.). The following information was extracted from each eligible study: first author’s name, year of publication, country and location of sampling, sample size (male/female), mean age and mean years of experience, oral health perceptions of health care professionals, type of assessment, and study funding. 
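For context on the quality grading described in the risk-of-bias subsection above: each of the seven adapted-NOS items contributes at most one star, and star totals are mapped to quality bands. A sketch with hypothetical cut-offs, since the review does not report its exact thresholds for low/fair/high:

```python
def nos_quality_band(stars):
    """Map a star total from the 7-item adapted NOS to a quality band.

    The low/fair/high cut-offs below are assumed for illustration only;
    the review does not state the exact thresholds it used.
    """
    if not 0 <= stars <= 7:
        raise ValueError("adapted NOS: at most one star per item, 7 items")
    if stars >= 6:
        return "high"
    if stars >= 4:
        return "fair"
    return "low"
```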
For nurse perceptions, some specific information was collected from the studies for comparison: knowledge of dental terms/oral health; previous training to provide oral health care, type of training, and perceived need for additional training; oral health care activities performed and availability of supplies to perform such care; access to oral health care provided by an oral health professional; perceived barriers/difficulties; and importance placed on oral health/relationship of oral health to systemic health. We recognize that this review relied on self-reported data (e.g., surveys or interviews) to assess health care professionals’ perceptions and practices. While self-reported data are commonly used in research of this nature, they introduce potential sources of bias, such as social desirability bias, where respondents may report behaviors or attitudes they believe are more socially acceptable or expected. Additionally, recall bias may influence the accuracy of self-reports, particularly when participants are asked to reflect on past experiences or behaviors. These limitations were considered when assessing the overall quality of the studies, and we critically discuss their potential impact on the findings in the subsequent sections. For data analysis, standard spreadsheet software (Microsoft Excel for Mac, version 16.50; Microsoft, Redmond, WA, United States) was used to extract data. Frequencies and percentages were used to describe categorical variables, while continuous variables were reported as mean ± standard deviation (SD) and range. 
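The descriptive analysis described above (frequencies and percentages for categorical variables; mean ± SD and range for continuous variables) can be reproduced with a few lines of code. A minimal sketch using illustrative data, not the review’s extracted dataset:

```python
from collections import Counter
from statistics import mean, stdev

def describe_categorical(values):
    """Count and percentage for each level of a categorical variable."""
    n = len(values)
    return {level: (count, round(100 * count / n, 1))
            for level, count in Counter(values).items()}

def describe_continuous(values):
    """Mean, sample standard deviation, and range of a continuous variable."""
    return {"mean": mean(values), "sd": stdev(values),
            "range": (min(values), max(values))}

# Illustrative data only (e.g., prior training yes/no, years of experience):
training = ["yes", "no", "yes", "yes"]
years_experience = [2.0, 5.0, 8.0]
```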
Results 3.1 Study selection The online search strategy identified 2,091 potentially relevant publications. 
After removing duplicates, 1,455 articles were assessed against the eligibility criteria, and 1,359 were excluded after title and/or abstract review. Of the 96 articles assessed for eligibility through full-paper review, one could not be retrieved and 60 were excluded, with reasons for exclusion detailed in . As a result, 35 observational studies were included in the qualitative synthesis. The PRISMA plot is shown in . The inter-observer reliability of the full-text screening was considered substantial (kappa score = 0.614, 95% CI: 0.471–0.757) . 3.2 Study characteristics A total of 6,179 participants (4,219 women and 554 men; 1,406 did not report gender) from the 35 included studies were analyzed in this systematic review . The calculated percentage of female participants (88.4% of those reporting gender) corroborates the literature, in which most caregivers are female. The sample included personnel directly involved in providing oral health care to residents of health facilities: mostly nurses, assistant nurses, qualified aides, and non-qualified aides, with some articles categorizing them only as caregivers or carers . Others included diverse populations such as occupational therapists, speech therapists, social workers, physiotherapists, nursing students, dental nurses, and dental hygienists . Of the 35 articles included, 9 (about 26%) were published before 2010 , with the oldest article published in 1999. All the remaining articles were published after 2010, and about 43% of them were published after 2015. The latest article was published in 2023. Most of the studies were conducted in care facilities for older adults, with the exception of one study conducted in a hospital setting . Several issues were considered in defining the cases assessed. 
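The screening-flow counts and the female-participant share reported above can be verified arithmetically. A quick consistency check using the figures from this review:

```python
# Counts reported in the review's PRISMA flow.
identified = 2091
after_duplicates = 1455
excluded_title_abstract = 1359
not_retrieved = 1
excluded_full_text = 60

full_text_assessed = after_duplicates - excluded_title_abstract      # 96
included = full_text_assessed - not_retrieved - excluded_full_text   # 35

# Participant gender: the 88.4% figure is the share of women among
# participants whose gender was reported (4,219 + 554 = 4,773).
women, men, unreported = 4219, 554, 1406
pct_female = round(100 * women / (women + men), 1)                   # 88.4
```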
Some studies addressed more than one issue: 29 studies investigated barriers or difficulties experienced when performing oral health care activities , 13 studies assessed the perceived importance of oral health care , 19 assessed oral health knowledge , 20 studies examined previous training received , and 24 the perceived need for training . Another 14 studies addressed access to oral health care provided by an oral health professional , 22 studies explored the oral health care activities performed , and 7 studies evaluated whether supplies to perform such oral health care activities were available . The methods used to collect data on oral health care provided by caregivers varied between studies, and some studies applied more than one method. Questionnaires were used in 24 studies , semi-structured interviews in 8 studies , and a more systematic data collection approach using the NDCBS in 5 studies . Furthermore, studies were conducted in 17 countries worldwide: Turkey , Brazil , France , Switzerland , Taiwan , USA , Sweden , Chile , Malaysia , Finland , Australia , Iceland , Serbia , Japan , the Netherlands , Norway , and Belgium . Of note, no studies were performed in Africa. 3.3 Methodological quality of the included studies The methodological quality of the studies varied significantly, with most studies falling into the fair (31.4%, n = 11) or high (28.6%, n = 10) quality categories, and 7 studies rated as low quality . None of the included studies described and calculated the non-response rate (item 3). Studies mostly failed to identify confounding factors and to perform a subgroup or multivariable analysis taking them into account (51.4%, n = 18) (item 5), and to use a validated screening/measurement tool (88.6%, n = 31) (item 4). This raises concerns regarding the reliability and generalizability of the findings. 
The heterogeneity of the studies was also evident, as different data collection methods were used, including questionnaires (24 studies), semi-structured interviews (8 studies), and the Nursing Dental Coping Belief Scale (NDCBS) (5 studies). This methodological diversity complicates direct comparisons between studies and highlights the potential for bias introduced by the lack of standardization in measurement tools. Furthermore, the study populations varied widely, including different categories of health care professionals (e.g., nurses, aides, dental professionals) across various countries, settings (nursing homes vs. hospitals), and types of training (formal vs. informal). This variation in study design and execution calls for caution in interpreting the aggregated results and underscores the need for more standardized approaches in future research. 3.4 Synthesis of evidence 3.4.1 Nursing dental coping belief scale The Nursing Dental Coping Belief Scale (NDCBS), originally validated in the U.S. for male veterans , was adapted for use with health care professionals in nursing settings . The aim was to create an oral health care priority index that could be used in both hospital wards and specialized facilities. The instrument consists of a 28-item questionnaire covering four dimensions: internal locus of control (IL), external locus of control (EL), self-efficacy (SE), and oral health care beliefs (OHCB). Lower scores represent positive dental coping beliefs (DCB) and a strong belief in one’s ability and competence to influence oral health behaviors. Four of the included studies used the NDCBS. Studies using the NDCBS have found that nurses’ beliefs about their ability to influence oral health behavior were often overly optimistic, with many overestimating their knowledge and skills. 
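As a rough illustration of how a 28-item, four-dimension instrument like the NDCBS yields subscale totals: the item-to-dimension mapping and response format below are assumptions made for this sketch (the review does not reproduce the instrument itself); lower totals correspond to more positive coping beliefs.

```python
# Hypothetical mapping: seven consecutive items per dimension.
SUBSCALES = {
    "IL": range(0, 7),      # internal locus of control
    "EL": range(7, 14),     # external locus of control
    "SE": range(14, 21),    # self-efficacy
    "OHCB": range(21, 28),  # oral health care beliefs
}

def score_ndcbs(responses):
    """Sum item responses within each of the four assumed subscales."""
    if len(responses) != 28:
        raise ValueError("expected 28 item responses")
    return {name: sum(responses[i] for i in items)
            for name, items in SUBSCALES.items()}
```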
However, their actual practice did not always support this self-assessment . In some studies, nurses with more formal training showed better beliefs about their competence . In contrast, other studies showed that longer work experience was paradoxically associated with poorer dental coping beliefs . The inconsistency of these findings points to the heterogeneity of carers’ perceptions, which may be influenced by personal attitudes, educational background, and workplace dynamics. 3.4.2 Perceived oral care barriers The barriers or difficulties experienced by caregivers in providing oral health care to residents, mentioned in 29 of the included studies , were categorized into three groups: barriers related to the residents themselves, barriers related to the organization, and barriers related to the caregiver . Among barriers related to the residents themselves, lack of cooperation was the most frequently reported, appearing in 15 studies . Negative attitudes, bad moods, cursing, and even physical violence are some of the challenging behaviors exhibited by residents and reported by caregivers. Other barriers include residents’ lack of interest or motivation , residents’ critical illness or debility , and residents’ refusal of oral health care . Most caregivers report that they do not have time to provide oral hygiene to the residents . Lack of oral hygiene materials , lack of staff , and lack of regular on-site support from dental health professionals are also reported as organizational barriers. Caregivers also report not having adequate training or skills to provide oral health care . In addition, motives such as disgust or lack of familiarity with the procedure , fear of causing harm , or lack of prioritization have also been reported as caregiver-related difficulties in providing oral health care. 
The variability in the nature and extent of these barriers across studies highlights the heterogeneity of care contexts and the complexity of addressing these challenges. 3.4.3 Training in providing oral health care The number and percentage of caregivers who received training in oral health care, and the type of training received, are shown in . In most studies, less than half of the caregivers reported receiving training in oral health care for older adults . Unfortunately, not all of these studies evaluated the type of training received. Those that did concluded that, in most cases, the training was informal or based on personal experience . However, in almost all studies that assessed the need for training, participants were interested in implementing training programs . This gap between the need for training and its actual provision reflects an important organizational barrier and highlights the potential of improved nurse education to strengthen oral health care practice. 3.4.4 Oral health knowledge, importance given to oral health, and oral health care activities performed A total of 18 studies assessed oral health knowledge using different measures. However, the conclusions were consistent: oral health knowledge was low. Gaps in oral health knowledge include beliefs that tooth loss is an inevitable part of aging or that caries is a communicable disease, and a lack of information about periodontitis . In a single study , caregivers had strong theoretical knowledge, but this was not reflected in the oral hygiene of the older adults, as observed by the mucosal and plaque indices. Although oral health literacy is low, participants recognize the importance of providing oral health care to residents and are aware of the interaction of systemic diseases and medical treatments with oral disease and the well-being of older adults . A total of 22 studies reported oral health activities performed by caregivers . 
The most commonly performed oral hygiene activity was tooth brushing , followed by denture cleaning . Other activities, such as rinsing the mouth with a mouthwash , removing dentures for sleep , cleaning the oral mucosa with a gauze in the absence of teeth , and flossing , were also performed, although with a much lower frequency. While some caregivers confirmed that the necessary materials to provide oral health care were available in the facilities , others expressed concern about the lack of resources, such as toothbrushes . The heterogeneity of practice across studies and settings further complicates the interpretation of findings, as some studies reported caregivers performing multiple oral health tasks, while others focused primarily on brushing or denture care. Access to oral health care provided by an oral health professional was assessed in 14 of the included studies . Most staff support the availability of dental chairs or an on-site dentist with portable dental units and regular visits by oral health professionals . However, home visits are not followed up, and regular check-ups in nursing homes are rare . Access to emergency care is a challenge, with reliance on local dentists and consequent delays . Only one study mentioned regular oral health campaigns, in which a dentist visits the home care facility to examine the older adults . These variations highlight the context-dependent nature of care provision and the need for more robust infrastructure and support for carers in many settings. 
The PRISMA plot is shown in . The inter-observer reliability of the full-text screening was considered substantial (kappa score = 0.614, 95% CI: 0.471–0.757) . Studies characteristics A total of 6,179 participants, 4,219 women, and 554 men (1,406 did not report gender), from all 35 included studies were included in this systematic review . The calculated percentage of 88.4% of female participants corroborates the literature, where most caregivers were female. The sample included personnel directly involved in providing oral health care to residents of health facilities: mostly nurses, assistant nurses, qualified aides, non-qualified aides, and some articles categorized them only as caregivers or careers . Others included diverse populations such as occupational therapists, speech therapists, social workers, physiotherapists, nursing students, dental nurses, and dental hygienists . Of the 35 articles included, 9 articles (about 17%) were published before 2010 , with the oldest article published in 1999. All the remaining articles were published after 2010, and about 43% of them were published after 2015. The latest article was published in 2023. Most of the studies were conducted in care facilities for the older adult, with the exception of one study developed in a hospital setting . Several issues were considered in the case definition setting. Some studies addressed more than one issue: 29 studies searched barriers or difficulties felt when performing oral health care activities , 13 studies assessed the perceived importance of oral health care , 19 accessed oral health knowledge , 20 studies emphasized previous training received , and 24 the perceived need for training . The other 14 studies mentioned the access to oral health care by an oral health professional , 22 studies explored the oral health care activities performed , and 7 studies evaluate if supplies to perform such oral health care activities were available . 
The methods used to collect data on oral health care provided by caregivers varied between studies and some applied more than one. Questionnaires were used in 24 studies , semi-structured interviews in 8 studies , and a more systematic data collection approach using the NDCBS in 5 studies . Furthermore, studies were conducted in 17 countries worldwide: Turkey , Brazil , France , Switzerland , Taiwan , USA , Sweden , Chile , Malaysia , Finland , Australia , Iceland , Serbia , Japan , Netherlands , Norway and Belgium . Of note, no studies were performed in Africa. Methodological quality of the included studies The methodological quality of the studies varied significantly, with most studies falling into the fair (31.4%, n = 11) or high (28.6%, n = 10) quality categories, and 7 studies rated as low quality . None of the included studies described and calculated the non-response rate (item 3). Studies mostly failed to identify confounding factors and to perform a subgroup or multivariable analysis taking them into account (51.4%, n = 18) (item 5) and to use a validated screening/measurement tool (88.6%, n = 31) (item 4). This presents a concern regarding the reliability and generalizability of the findings. The heterogeneity of the studies was also evident, as different data collection methods were used, including questionnaires (24 studies), semi-structured interviews (8 studies), and the Nursing Dental Coping Belief Scale (NDCBS) (5 studies). This methodological diversity complicates direct comparisons between studies and highlights the potential for bias introduced by the lack of standardization in measurement tools. Furthermore, the study populations varied widely, including different categories of health care professionals (e.g., nurses, aides, dental professionals) across various countries, settings (nursing homes vs. hospitals), and types of training (formal vs. informal). 
This variation in study design and execution calls for caution in interpreting the aggregated results and underscores the need for more standardized approaches in future research. Synthesis of evidence 3.4.1 Nursing dental coping belief scale The Nursing Dental Coping Belief Scale (NDCBS), originally validated in the U.S. for male veterans , was adapted for use with health care professionals in nursing settings . The aim was to create an oral health care priority index that could be used in both hospital wards and specialized facilities. The instrument consists of a 28-item questionnaire covering four dimensions: internal locus of control (IL), external locus of control (EL), self-efficacy (SE), and oral health care beliefs (OHCB). Lower scores represent an individual’s positive DCB and strong belief in their ability and competence to influence oral health behaviors. Four of the included studies used the NDCBS. The scale measures four dimensions: internal locus of control, external locus of control, self-efficacy, and beliefs about oral health care. Studies using the NDCBS have found that nurses’ beliefs about their ability to influence oral health behavior were often overly optimistic, with many overestimating their knowledge and skills. However, their actual practice did not always support this self-assessment bias . In some studies, nurses with more formal training showed better beliefs about their competence . In constrast, while other studies showed that more extended work experience was paradoxically associated with poorer dental coping beliefs . The inconsistency of these findings points to the heterogeneity of carers’ perceptions, which personal attitudes, educational background, and workplace dynamics may influence. 
3.4.2 Perceived oral care barriers

The barriers or difficulties experienced by caregivers in providing oral health care to residents, mentioned in 29 of the included studies, were categorized into three groups: barriers related to the residents themselves, barriers related to the organization, and barriers related to the caregiver. Among resident-related barriers, lack of cooperation was the most frequently reported, in 15 studies. Negative attitudes, bad moods, cursing, and even physical violence are some of the challenging behaviors exhibited by residents and reported by caregivers. Other resident-related barriers include lack of interest or motivation, critical illness or debility, and refusal of oral health care. Regarding organizational barriers, most caregivers report that they do not have time to provide oral hygiene to the residents. Lack of oral hygiene materials, lack of staff, and lack of regular on-site support from dental health professionals are also reported. Caregivers also report not having adequate training or skills to provide oral health care. In addition, motives such as disgust or lack of familiarity with the procedure, fear of causing harm, or lack of prioritization have been reported as caregiver-related difficulties in providing oral health care. The variability in the nature and extent of these barriers across studies highlights the heterogeneity of care contexts and the complexity of addressing these challenges.

3.4.3 Training in providing oral health care

The included studies reported the number and percentage of caregivers who received training in oral health care and the type of training received. In most studies, less than half of the caregivers reported receiving training in oral health care for the older adult. Unfortunately, not all of these studies evaluated the type of training received.
Those that did so concluded that, in most cases, the training was informal or based on personal experience. However, in almost all studies that assessed the need for training, participants were interested in implementing training programs. This gap between the need for training and the actual provision of training reflects an important organizational barrier and highlights the potential of improved nurse education to strengthen oral health care practice.

3.4.4 Oral health knowledge, importance given to oral health, and oral health care activities performed

A total of 18 studies assessed oral health knowledge using different measures; however, the conclusions were consistent in indicating low oral health knowledge. Gaps in oral health knowledge include beliefs that tooth loss is an inevitable part of aging or that caries is a communicable disease, and a lack of information about periodontitis. In a single study, caregivers were highly educated in the theoretical context, but this was not reflected in the oral hygiene of the older adults, as observed by the mucosal and plaque indices. Although oral health literacy is low, participants recognize the importance of providing oral health care to residents and are aware of the interaction of systemic diseases and medical treatments with oral disease and the well-being of the older adult. A total of 22 studies reported oral health activities performed by caregivers. The most commonly performed oral hygiene activity was tooth brushing, followed by denture cleaning. Other activities, such as rinsing the mouth with a mouthwash, removing dentures for sleep, cleaning the oral mucosa with a gauze in the absence of teeth, and flossing, were also performed, although with much lower frequency. While some caregivers confirmed that the necessary materials to provide oral health care were available in the facilities, others expressed concern about the lack of resources, such as toothbrushes.
The heterogeneity of practice across studies and settings further complicates the interpretation of findings, as some studies reported caregivers performing multiple oral health tasks, while others focused primarily on brushing or denture care. Access to oral health care by an oral health professional was assessed in 14 of the included studies. Most staff support the availability of dental chairs or an on-site dentist with portable dental units and regular visits by oral health professionals. However, home visits are not followed up, and regular check-ups in nursing homes are rare. Access to emergency care is a challenge, with reliance on local dentists and delays. Only one study mentioned regular oral health campaigns, where a dentist goes to the home care facility to examine the older adult. These variations highlight the contextual nature of care provision and the need for more robust infrastructure and support for carers in many settings.
Discussion

4.1 Summary of main findings

This systematic review provides an in-depth analysis of the oral health care challenges faced by carers of dependent older adults.
It highlights several key issues: the gap between education and practice, the persistence of barriers to adequate oral health care, and a lack of oral health literacy among carers. The reviewed studies show that although carers recognize the importance of oral health and its link to systemic health, their ability to provide adequate care is often hampered by insufficient formal training, inadequate resources, and organizational challenges. Caregivers were primarily involved in brushing teeth and cleaning dentures but were less likely to perform more complex oral health tasks. Despite these challenges, carers demonstrated a strong awareness of the need for oral care in older people, although their knowledge of oral health practices and conditions remained limited. Results from studies using the Nursing Dental Coping Belief Scale (NDCBS) show a significant discrepancy between carers' beliefs about their competence to provide oral care and the actual practices observed. Experienced carers often reported facing more challenges, possibly due to burnout or a mismatch between training and the demands of caring. The barriers identified across studies can be categorized into resident-related, organizational, and carer-related factors, each contributing to suboptimal oral health care.

4.2 Implications for practice and research

The included studies showed that oral health care practices for dependent older adults are still inadequate, insufficient, and unsystematic. Although guidelines for appropriate oral health care exist, training in oral and prosthetic hygiene has been shown to have a positive impact, and various oral health training programs for care providers working in geriatric settings have been described in the literature.
However, a systematic review of strategies to improve oral health care showed that there is still a need to improve the strategies used to change oral health care behaviors, as providing general information seems to be successful in increasing oral health knowledge but does not necessarily improve oral health. In addition, another systematic review showed that oral health education programs may indeed have a positive effect on oral hygiene in the older adult, although some limitations of the included studies were noted. Therefore, caregivers need structured training programs that improve their knowledge and equip them with the skills and resources to effectively perform daily oral health tasks. Training programs can be more effective if they are tailored to the specific needs of caregivers in different settings and focus on practical training. In addition, such training should be regularly updated to reflect advances in oral health care for older people and integrated into the routine activities of nursing homes and care facilities. Dental professionals must actively participate in training and provide ongoing support, as this significantly improves caregivers' confidence and competence in delivering oral health care. In addition, the financial burden of dental care for nursing home residents remains a significant issue. Oral health care is often excluded from public health coverage, leaving residents to pay for treatment. This factor contributes to the neglect of oral health and increases the risk of significant oral disease. We must implement policy changes to integrate dental care into the broader health care framework for older people and provide financial support to reduce out-of-pocket costs for residents.

4.3 Strengths and limitations

This systematic review was conducted according to PRISMA, a rigorous and widely recommended guideline that increases robustness and reduces reporting errors.
In addition, an extensive literature search was conducted using a meticulous predefined protocol. However, some limitations need to be discussed. Most studies used a convenience sample of nursing homes in the study area, so the results might have differed if other facilities had been included. In addition, only a few health care professionals from each sample site participated in the surveys. As a result, the small sample sizes limit the ability to extrapolate the data to the rest of the population and to detect small differences between groups as statistically significant. Another limitation is the reliance on self-reported data, particularly from questionnaires and interviews, which can introduce various forms of bias. Carers may be motivated to give socially desirable answers, overestimating their level of training or the quality of care they provide. Recall bias is also a concern, as caregivers may have difficulty accurately recalling specific events or practices related to oral health care. In addition, the heterogeneity of the studies—ranging from differences in data collection methods (e.g., questionnaires vs. interviews) to differences in study populations (e.g., type of caregiver, setting, geographic location)—makes it difficult to draw firm conclusions about the generalizability of the findings. The lack of standardized measurement tools across studies makes it difficult to compare results, especially for complex constructs such as oral health knowledge and caregiver self-efficacy. Future studies should focus on data representativeness and method standardization to ensure more homogeneous evidence-based results. The NDCBS is a standardized assessment tool that should be more widely used; the resulting information is extremely important for improving the oral health of nursing home residents and, consequently, their well-being and systemic health.
It is also important for educating nursing home administrators about the improvements that can be made in oral health care.

4.4 Recommendations for overcoming barriers

The findings of this review support several actionable strategies to address the barriers to providing oral health care for older adults:

Standardize training programs: Institutions can formalize nursing training, incorporating hands-on sessions that focus on practical aspects of oral health care, especially for non-dental professionals. These programs should be integrated into nurses' induction processes and continuing education initiatives, ensuring they acquire and maintain up-to-date knowledge and skills.

Improve access to resources: Facilities can ensure the availability of adequate oral health supplies, including toothbrushes, denture care products, and other essential materials. Regular efforts are needed to maintain the accessibility and readiness of these resources for staff use.

Policy changes for financial support: Governments and health systems can extend dental care coverage for older people in long-term care facilities. This may involve incorporating dental services into existing health programs or creating separate funding for dental care for the older adult.

Regular monitoring and support: Ongoing support from dental professionals should be integrated into the care routine for older residents, ensuring that carers have access to advice when needed. In addition, regular monitoring of oral health outcomes should be implemented to identify problems early and improve the overall quality of care.
Conclusion

This review highlights the multiple barriers to oral health care for dependent older adults, including time constraints, lack of training, inadequate resources, and poor collaboration among caregivers. In particular, caregiver training programs are often informal and experiential, while oral health literacy remains low, creating a critical gap in their ability to provide adequate care.
The included studies' methodological limitations, such as reliance on self-reported data and lack of standardized measures, highlight the need for more robust and standardized research designs. Addressing these challenges requires structured, evidence-based training programs for caregivers. These programs should be comprehensive, combine theoretical knowledge with practical skills, and directly address the barriers identified in this review. In addition, systemic changes are needed to ensure that older adult residents have financial access to dental care, the lack of which is often a significant barrier to optimal care. Future research should focus on overcoming the limitations of current studies by standardizing data collection methods and using validated instruments, such as the NDCBS, to ensure greater comparability between studies. Longitudinal studies or randomized controlled trials are essential to assess the effectiveness of different educational programs and interventions in improving oral health knowledge and clinical outcomes in older populations. Researchers must investigate the cost-effectiveness of integrating oral health care into long-term care and develop strategies to incentivize dental professionals to participate in routine care. They should also analyze the benefits of interdisciplinary care models that include nurses and dental professionals and evaluate how policy changes can improve access to dental care for older adults, especially in regions with limited public dental coverage.
Autopsy case of cardiac mantle cell lymphoma presenting with recurrent pulmonary tumor embolism after chemotherapy

Mantle cell lymphoma (MCL) is a mature B-cell neoplasm consisting of small- to medium-sized cells with irregular nuclei. Classical MCL (cMCL) immunophenotypically expresses CD5, SOX11, and cyclin D1, and the CCND1::IgH translocation is identified in most MCL cases. cMCL frequently presents with nodal/non-nodal lymphoid tissue, bone marrow, and gastrointestinal tract involvement; other extra-nodal sites include the kidney, skin, and central nervous system. MCL has two aggressive variants, namely blastoid-type and pleomorphic-type MCL. Although aggressive MCL can be effectively treated with a dose-intensified regimen containing rituximab and high-dose cytarabine, the prognosis after autologous or allogeneic hematopoietic stem cell transplantation is suboptimal. Therefore, clinical management of aggressive MCL is extremely difficult. Primary cardiac lymphoma (PCL) is an extremely rare malignancy, accounting for only 2% of primary cardiac tumors. PCL presents as pericardial effusion, a cardiac mass, or direct myocardial invasion without other lymphoma lesions, and right-sided masses are predominantly reported. Secondary cardiac involvement at diagnosis or relapse of lymphoma is more frequent than PCL, and is also identified in 8.7% to 25% of autopsy cases. Although most PCLs are diffuse large B-cell lymphomas (DLBCLs), secondary cardiac involvement can also be seen in DLBCL and other lymphoma subtypes, such as Burkitt lymphoma, high-grade B-cell lymphoma, follicular lymphoma, small B-cell lymphoma, T-cell lymphoma, and NK/T-cell lymphoma. In a Japanese single-institute retrospective analysis, echocardiography (UCG) identified six cases (1.5%) of secondary cardiac involvement.
There were five cases of DLBCL and one case of B-cell lymphoma; however, MCL with secondary cardiac involvement was not identified on UCG. These results suggest that cardiac involvement in MCL is extremely rare. Two cases of MCL with secondary cardiac involvement have recently been reported. One case presented with pericardial effusion and cervical lymphadenopathy, and improved after intensive chemotherapy. The other case presented with a large mass lesion in the right atrium (RA) with multiple lymphadenopathies and splenomegaly; the patient died of cardiogenic shock after surgical resection of the RA mass, although the official cause of death was not identified. Pulmonary embolism (PE) is a possible cause of death in cardiac lymphoma involving the RA; however, direct lymphoma embolism of MCL in the RA has not been reported. Therefore, the optimal treatment for MCL in the RA and the prevention of PE have not yet been established. Here, we report a case of pleomorphic-type MCL complicated by recurrent PE from a lymphoma lesion in the RA after chemotherapy that resulted in death.

The patient was a 78-year-old Japanese man with a medical history of gastric ulcers, depression, diabetes, left brachial artery stenosis, and anterior neck abscess. He had been experiencing a sore throat and fever for approximately 3 weeks and was admitted to the emergency department because of worsening sore throat, fever, anorexia, lower limb edema, and immobility. His vital signs on admission were as follows: heart rate, 100/min; blood pressure, 130/71 mmHg; and oxygen saturation, 98% on ambient air. Physical examination revealed swelling of the right tonsil, right cervical lymphadenopathy, and marked bilateral edema of the lower body. Blood tests revealed liver and renal dysfunction, abnormal coagulation, and increased C-reactive protein and soluble interleukin-2 receptor levels. An electrocardiogram showed sinus tachycardia.
Contrast-enhanced computed tomography (CT) showed multiple lymphadenopathies in the right pharynx and neck. A tumor was identified in the right adrenal gland, extending from the inferior vena cava (IVC) to the RA; however, PE was not identified. Tumor invasion of the hepatic vein caused congestion in the right lobe of the liver. UCG demonstrated that the left ventricular ejection fraction was maintained at 57%; however, the mass lesion (4.0×2.0×3.5 cm) in the RA was mobile and extended into the right ventricle via the tricuspid valve. Right tonsil biopsy revealed a diffuse proliferation of medium-to-large-sized lymphoid cells with round-to-irregular-shaped nuclei containing distinct nucleoli. Necrosis was present. On immunohistochemistry, the tumor cells were positive for CD20. The patient was then diagnosed with aggressive B-cell lymphoma with extranodal involvement. Bone marrow examination revealed no lymphoma cell invasion. It was unclear whether the cardiac tumor was lymphoma, and we were concerned about the risk of tumor embolism after starting treatment. Therefore, a cardiologist was consulted, and surgical resection was recommended to avoid PE after treatment. However, during the work-up and discussion with the cardiologist, the patient's condition rapidly deteriorated, and he developed multiple organ failure, suggesting that systemic lymphoma disease activity had worsened. We therefore judged that the patient could not tolerate surgical resection. Although a pathological diagnosis of the cardiac tumor had not been made, we clinically judged that the abnormal mass lesions outside the lymph nodes were likely derived from malignant lymphoma. On this basis, we discussed the treatment plan with the patient and finally decided to start chemotherapy to improve his critical condition.
We immediately started R-CHOP therapy (cyclophosphamide 750 mg/m² on day 1, doxorubicin 50 mg/m² on day 1, vincristine 1.4 mg/m² on day 1, prednisolone 100 mg on days 1–5, and rituximab 375 mg/m² on day 6). However, on the 5th day of chemotherapy, the patient developed dyspnea and markedly decreased oxygenation. Contrast-enhanced CT revealed multiple peripheral PEs; however, anticoagulation could not be initiated because of hematuria. On the 9th day of chemotherapy, the patient's respiration deteriorated, and CT revealed bronchiolitis in the left lower lobe of the lung. On the 13th day of chemotherapy, he developed shock and required noradrenaline. Contrast-enhanced CT revealed new PEs. The final pathological report showed that the tumor cells were positive for cyclin D1 and SOX11; partially positive for PAX-5; and negative for CD5, BCL-2, CD3, CD10, CD21, and CD23. The Ki-67 index was >90%. Split fluorescence in situ hybridization (FISH) analysis targeting CCND1 was positive (8%), and split FISH targeting Myc was negative, supporting a diagnosis of pleomorphic-type MCL. The disease was refractory to R-CHOP; hence, we attempted BR therapy (rituximab 375 mg/m² on day 1 and bendamustine 60 mg/m² on days 1 and 2). The patient eventually died of respiratory failure.

Autopsy confirmed that the pleomorphic-type MCL had invaded the right palatine tonsil, right submandibular lymph node, right adrenal gland, and IVC. IVC obstruction by lymphoma had caused hepatic, renal, and splenic congestion. The pleomorphic-type MCL had infiltrated multiple pulmonary arteries, resulting in pulmonary infarcts. Autopsy thus confirmed that the patient died of multiple PEs caused by lymphoma lysis in the RA after chemotherapy.

In summary, we encountered a case of pleomorphic-type MCL complicated by recurrent PE after chemotherapy. The patient died of respiratory failure due to multiple PEs, and autopsy confirmed tumor embolism of the pleomorphic-type MCL.
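The R-CHOP doses described above are specified per square metre of body surface area (BSA), so the absolute dose is simple arithmetic once BSA is estimated. A minimal sketch using the standard Du Bois formula (the height and weight below are hypothetical, not taken from the report):

```python
def bsa_du_bois(height_cm: float, weight_kg: float) -> float:
    """Estimate body surface area (m^2) with the Du Bois formula."""
    return 0.007184 * (height_cm ** 0.725) * (weight_kg ** 0.425)

def rchop_doses(bsa_m2: float) -> dict:
    """Absolute doses (mg) for the BSA-scaled R-CHOP components."""
    per_m2 = {"cyclophosphamide": 750, "doxorubicin": 50,
              "vincristine": 1.4, "rituximab": 375}
    return {drug: round(dose * bsa_m2, 1) for drug, dose in per_m2.items()}

# Hypothetical patient: 165 cm, 60 kg
bsa = bsa_du_bois(165, 60)
print(f"BSA = {bsa:.2f} m^2")  # ~1.66 m^2
print(rchop_doses(bsa))
```

(Prednisolone in the regimen is given as a flat 100 mg dose and is not BSA-scaled.)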
Our case illustrates the difficulty of treating MCL involving the RA. We therefore discuss the prevention of posttreatment PE and the potential usefulness of surgical resection of RA lymphoma before chemotherapy. An early case series of 35 patients with primary cardiac lymphoma (PCL) found that most PCLs involved the RA. Treatments were heterogeneous and included surgery, chemotherapy, radiotherapy, and combined therapy. Surgical resection of cardiac lymphoma is performed for several reasons: first, to obtain a diagnosis of lymphoma, which is essential in cases of a cardiac mass without other lesions; and second, to treat cardiac lymphoma by complete resection. However, because complete surgical resection of PCL is often difficult, and patients who do not receive chemotherapy can relapse after surgery, surgery with curative intent is not routinely performed. The third reason is to relieve significant or life-threatening cardiopulmonary complications, including superior vena cava (SVC) syndrome, rapidly progressing heart failure, and hemodynamic instability. A previous study reviewed 44 cases of newly diagnosed and/or relapsed malignant lymphoma with secondary cardiac involvement. As in PCL, DLBCL is the major subtype, and the RA is the most frequently involved site (40.5%) in secondary cardiac lymphoma. The cases with RA lymphoma presented with sick sinus syndrome, complete atrioventricular block, cardiopulmonary instability, and SVC syndrome. Most cases in that review were treated with chemotherapy; surgical resection was performed in only a few. These observations demonstrate that the clinical presentation and lymphoma subtypes of secondary cardiac lymphoma are similar to those of PCL: chemotherapy is preferred for all patients, while surgery is indicated only in cases with fatal arrhythmia and/or cardiopulmonary instability. We introduced chemotherapy because our patient did not present with hemodynamic instability or fatal arrhythmia.
However, he developed MCL-derived PEs after chemoimmunotherapy and died. We therefore need to consider which patients would benefit from surgery and how to prevent fatal posttreatment complications in cases with secondary cardiac invasion of the RA. To clarify the clinical course and management of Japanese patients with secondary cardiac lymphoma involvement, we reviewed the literature and found 12 cases of secondary (not primary) cardiac lymphoma. The case details are summarized in . Most cases had DLBCL histology. Surgery was performed before chemotherapy in six of the seven patients who underwent surgery; one patient underwent surgery for cardiogenic shock due to concomitant PE. In the other cases, the main reasons for surgery were to prevent PE or to address hemodynamic instability due to tricuspid valve obstruction. Regarding survival outcomes, two of the five patients who received chemotherapy alone died, one of lymphoma progression and one of PE. In contrast, all seven patients who received surgery combined with chemotherapy survived. We found four additional cases of secondary cardiac MCL whose clinical characteristics were available in the literature; these are summarized in . Case 1 had pericardial invasion, Case 2 had myocardial invasion, and Cases 3 and 4 had mass lesions in the RA. Case 1 presented with cervical lymph node swelling, a bulky pelvic lesion, and pericardial effusion; the patient developed cardiac tamponade, was intubated in the intensive care unit, and recovered markedly with chemotherapy (rituximab, cyclophosphamide, vincristine, doxorubicin, and dexamethasone). Case 2 had multiple lymphadenopathies (axillary, retroperitoneal, pelvic, and inguinal), moderate splenomegaly, peripheral blood involvement, and direct cardiomyocyte invasion. Case 3 had multiple lymphadenopathies (axillary, retroperitoneal, pelvic, and inguinal), splenomegaly, peripheral blood and bone marrow involvement, and a large mass (43 mm × 32 mm) in the RA.
BR therapy was started, but the patient's hemodynamic condition worsened soon after treatment. Emergency surgery to remove the cardiac mass was performed, and pathological analysis of the resected mass revealed cardiac involvement of MCL; however, the patient died on the second postoperative day. These outcomes raise important questions about the risk of PE after chemotherapy in cardiac MCL and about identifying the patients who would benefit most from preventive surgery before chemotherapy. First, high mobility of an RA tumor might be a useful surrogate indication for PE-preventing surgery before chemotherapy. As shown in , not all DLBCL patients with RA lesions undergo preventive surgery before chemotherapy, suggesting that there may be predictors of PE after starting chemotherapy in RA lymphoma. The decision to operate is often based on the mobility of the cardiac tumor. Immobile cardiac lymphoma can be treated successfully with chemotherapy without PE or right heart obstruction, and surgical resection of tumors whose mobility increases after chemotherapy has also successfully prevented PE and hemodynamic instability. These observations suggest that hemodynamic instability and cardiac tumor mobility might serve as surrogate indications for surgical resection of secondary cardiac lymphoma before starting chemotherapy. Supporting this hypothesis, preventive surgical resection before chemotherapy efficiently avoided posttreatment PE in the case series. In our case, UCG revealed high mobility of the RA tumor before chemotherapy; preventive surgery may therefore have been beneficial before the initiation of chemotherapy. As such, monitoring for increased mobility of cardiac tumors might help determine the optimal timing of surgery. Second, the lymphoma subtype might affect posttreatment responses in cardiac lymphoma lesions.
Most RA lymphomas are DLBCL, and chemotherapy alone may cure DLBCL-type PCL and DLBCL-type secondary cardiac lymphoma. Similarly, MCL-type cardiac lymphoma without a cardiac mass could be treated with chemotherapy alone, as shown in Cases 1 and 2. In contrast, the two patients with MCL-type cardiac lymphoma with RA lesions died when treated with chemotherapy alone, without preventive surgery (Cases 3 and 4 in ). These findings suggest that MCL without RA lesions can be treated safely with chemotherapy alone, but that MCL with RA lesions may carry a higher risk of PE and/or hemodynamic instability after chemotherapy than DLBCL-type cardiac lymphoma. It should be noted that cases of MCL with RA lesions are extremely rare, which limits our analysis; the effect of histological subtype (MCL versus DLBCL) on the risk of PE after chemotherapy should therefore be clarified in future investigations. Third, we reviewed whether tumor size was related to the benefit of preventive surgery. One patient with a large tumor survived after chemotherapy alone (Case 3 in ). This might be explained by preventive surgery efficiently avoiding cardiopulmonary complications during the early posttreatment phase. However, considering that patients with large cardiac lesions over 3 cm mainly underwent preventive surgery before chemotherapy, we could not rule out the possibility that tumor size is positively correlated with PE or death. Our report sheds light not only on the potential usefulness of preventive surgery before chemotherapy for MCL invading the RA, but also on the possibility of different posttreatment responses depending on the lymphoma subtype. However, there are limitations to its interpretation. Most secondary cardiac lymphomas in Japan are DLBCL, and cardiac MCL with RA lesions is extremely rare.
Therefore, we could not draw a definitive conclusion regarding which patients with cardiac MCL and RA lesions would benefit from preventive surgery. In addition, there may be undetermined risk factors for preventive surgery before chemotherapy (e.g., cardiac tumor size at diagnosis), and the chemotherapy regimen also affects the treatment outcome of cardiac malignant lymphoma. Accumulation of similar cases would help establish criteria for identifying patients who need preventive surgery before chemotherapy.

In conclusion, we report a case of pleomorphic-type MCL that preferentially invaded the large vessels and RA and presented with recurrent PE after chemotherapy. The high mobility of MCL tumors in the RA may pose a risk of fatal PE after chemotherapy; preventive surgery before chemotherapy should therefore be considered to avoid PE. Hematologists treating secondary cardiac lymphoma in the RA should communicate closely with cardiologists, who can evaluate the mobility of cardiac tumors by UCG, and discuss the need for surgery before chemotherapy.
Pilomatrix carcinoma of the lacrimal caruncle: a case report

Pilomatricoma is a rare, benign, slow-growing dermal or subcutaneous tumor that occurs most commonly on the head, neck, extremities, and trunk. In 1949, Lever and Griesemer proposed that this tumor originates from hair matrix cells. In 1961, Forbis and Helwig proposed the currently accepted name "pilomatricoma". In 1980, Lopansri and Mihm identified malignant transformations of these tumor cells, which they described as "pilomatrix carcinoma" (PMC) or "calcifying epitheliocarcinoma of Malherbe". In the present report, we describe the clinical and histological features of a very rare case of malignant hair follicle tumor involving the conjunctiva. To the best of our knowledge, this is the first report of PMC of the ocular surface.

A 45-year-old man presented with a nontender, enlarged mass measuring approximately 15 mm × 15 mm that had grown from the lacrimal caruncle of the right eye over more than three months. He did not complain of pain or discharge from the mass. An incisional biopsy had been performed one month earlier by another specialist; the histopathology report indicated basal cell carcinoma. The patient had no notable ophthalmic or medical history. On ocular examination, visual acuity in both eyes was 10/10 (decimal scale), and extraocular movements were normal. The surface of the pinkish mass was smooth without visible vessels, and the well-demarcated lesion was not connected to the surrounding skin. No other lesions were present on the eyelid. The results of the anterior and posterior ocular examinations were normal. The patient's general physical examination results were also normal, with no evidence of preauricular or submandibular lymphadenopathy. The patient underwent surgical excision and topical chemotherapy.
Under general anesthesia, the mass was completely excised with a 2 mm safety margin and sent for histopathological examination. Double freeze-thaw cryotherapy was applied to the conjunctival borders and the base of the mass. The large conjunctival defect was reconstructed with a sheet of amniotic membrane allograft secured with polyglactin sutures. Histopathological examination revealed subepithelial irregular infiltration of basaloid cells with hyperchromatic, ovoid, and vesicular nuclei and limited cytoplasm, arranged in a trabecular or nested pattern. Some nests of basaloid cells showed central keratinization. In some areas, enucleated "ghost" or "shadow" cells with eosinophilic cytoplasm were present; these cells appeared to have merged with groups of basaloid cells. Focal giant cell reactions were detected, and no calcification was present. Invasion of the surrounding tissue was observed within desmoplastic stroma, and there were no retraction artifacts between the basaloid cells and the stroma. Mitoses were frequent (on average 20–25 per 10 high-power fields in basaloid areas). No definite vascular or lymphatic permeation was identified. The Ki-67 proliferation index was 80%. Immunohistochemical analysis showed diffuse positivity for Ber-EP4 but no reactivity for EMA, p63, S100, CD56, or CK20. A diagnosis of PMC was made on the basis of patient age, tumor localization, and histomorphological findings. The surgical margins were free of tumor cells. The results of a complete blood cell count, renal and liver function tests, chest x-ray, and neck and abdominal ultrasound investigations were normal. The patient provided informed consent for further treatment; we therefore administered bevacizumab eye drops (25 mg/mL; 1.25 mg per drop) four times per day for three months to prevent recurrence after surgery.
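The drop strength quoted above (25 mg/mL solution) is consistent with simple concentration arithmetic, assuming a typical ophthalmic drop volume of about 0.05 mL; the drop volume is our assumption, not stated in the report:

```python
concentration_mg_per_ml = 25.0  # bevacizumab solution strength from the report
drop_volume_ml = 0.05           # assumed typical eye-drop volume (not in the report)

dose_per_drop_mg = concentration_mg_per_ml * drop_volume_ml
daily_dose_mg = dose_per_drop_mg * 4  # administered four times per day

print(dose_per_drop_mg, daily_dose_mg)  # 1.25 5.0
```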
At the one-year follow-up after the initial excision, the patient showed no evidence of local recurrence or distant metastasis, and he continues to be closely monitored.

The most common sites for PMC are the head and neck, followed by the upper extremities, trunk, and lower extremities. In a review by Sia et al., six of the 16 reported hair follicle malignancies in the periorbital region were PMCs. In that review, the upper lid was the site most commonly affected by malignant hair follicle tumors, followed by the lower lid, eyebrow, and medial canthal region. The lacrimal caruncle has a nonkeratinized epithelial lining similar to the conjunctival epithelium. Developmentally, it constitutes a part of the lower lid and therefore contains hair follicles, sebaceous glands, and sweat glands; it also harbors accessory lacrimal tissue. Hence, neoplasms that may arise from the skin, conjunctiva, and lacrimal gland can develop in the lacrimal caruncle. No case identified in the PubMed database has reported a primary hair follicle tumor of the lacrimal caruncle. Conjunctival involvement is not a typical finding in hair follicle malignancies. Lee et al. reported a case of trichilemmal carcinoma of the upper eyelid in a 51-year-old man in which the mass completely penetrated the inner side of the upper lid and was visible on the conjunctival side during slit-lamp examination. To the best of our knowledge, the present report is the first description of PMC of the ocular surface involving the conjunctiva. In our case, PMC was originally misdiagnosed as basal cell carcinoma after incisional biopsy and was later diagnosed as PMC on the basis of the excised specimen. This misdiagnosis was caused by the lack of clear histologic criteria and of a specific marker to distinguish this neoplasm from other matrical tumors.
Histopathologically, PMC can be distinguished from benign pilomatricoma, trichoblastic carcinoma, and basal cell carcinoma by its matrical differentiation. Basal cell carcinoma presents with infiltrating islands of palisading basaloid cells with shadow cells. Histological examination can be challenging, and malignant pilomatricoma with many basophilic basal cells is often mistaken for basal cell carcinoma. Differentiation depends on the observation of retraction spaces between the neoplastic cells and the stroma; in the present case, there were no retraction artifacts between the basaloid cells and the stroma. Furthermore, proliferating pilomatricoma, a pathological variant of pilomatricoma that consists predominantly of mitotically active basaloid cells, should be included in the differential diagnosis. In such cases, poor circumscription, an asymmetrical appearance, atypical mitoses, and lymphovascular invasion are features that favor a diagnosis of PMC. PMC can exhibit locally aggressive behavior with a tendency toward recurrence; we therefore administered topical chemotherapy to prevent recurrence even though the surgical margins were free of tumor cells. To the best of our knowledge, this is the first reported case of PMC of the ocular surface, so no standard topical chemotherapy protocol exists for similar cases. It has recently been reported that topical bevacizumab may reduce tumor size prior to surgery, or may even cure the tumor completely, in ocular surface squamous neoplasia. Additionally, systemic bevacizumab is a treatment option in PMC when local recurrence and distant metastasis are detected. Although topical bevacizumab as an adjuvant therapy after surgical treatment of ocular surface neoplasms is not a proven treatment option, it is regarded as a reasonable and safe option on the basis of the literature described above.
We have not observed any side effects after three months of continuous treatment with topical bevacizumab. Local recurrences and metastatic disease have been documented after simple excision of these lesions; a sufficiently wide excision, with tumor-free margins confirmed on histopathological examination, is therefore important. In our case, after considering the clinical appearance of the surgical margins and the results of the previous biopsy, we concluded that resection of the tumor with a 2 mm safety margin would ensure its complete removal. A wider excision could have been performed after the pathological examination, but there was a high risk of inadvertently damaging the canalicular system because of the proximity of the primary lesion. Furthermore, the patient would have been required to undergo general anesthesia, because he could not tolerate surgery under local anesthesia, and precise excision would have been more difficult because the tumor margin would be obscured after two previous surgeries in the same area. The patient was informed of the situation and decided not to undergo a second surgical procedure; consequently, a second surgery was not performed. Topical chemotherapy was initiated, and the patient remains under continuous monitoring. This case highlights the rare potential for conjunctival lesions to have unusual origins with potentially serious consequences. To allow accurate excision during the initial surgery, frozen-section examination should be performed intraoperatively, and the differential diagnosis should include conditions such as PMC.
Autoantibody test for type 1 diabetes in children: are there reasons to implement a screening program in the general population? A statement endorsed by the Italian Society for Paediatric Endocrinology and Diabetes (SIEDP-ISPED) and the Italian Society of Paediatrics (SIP)

Type 1 diabetes mellitus (T1D) is one of the most frequent chronic diseases in children; it is due to autoimmune destruction of the insulin-producing β-cells in the islets of Langerhans within the pancreas. Patients with T1D lose blood glucose control, which can result in both acute conditions (ketoacidosis and severe hypoglycemia) and chronic complications (retinopathy, nephropathy, neuropathy, and cardiovascular diseases). The pathogenesis of T1D involves environmental factors (e.g., enteroviral infection) and polygenic predisposition. The incidence of T1D has increased dramatically over the last five decades, especially in children younger than five years. Those under the age of 18 years are most often affected, but an equal number of adults over 18 are thought to develop the disease. Currently there is no cure for T1D; patients therefore face lifelong insulin treatment and, in most cases, the development of disease-related complications. Additionally, T1D imposes a huge economic burden on patients, their families, and health systems globally. Numerous studies have shown that new technologies improve glycaemic control and long-term outcomes in children and adolescents with type 1 diabetes; these devices have also improved quality of life and patient satisfaction. Here we discuss the main unmet needs in type 1 diabetes and the opportunity presented by population screening.
New drugs and technologies promise improvements in the care of patients with type 1 diabetes; however, some major unmet needs remain, including the high frequency of diabetic ketoacidosis (DKA), the continued threat of hypoglycemia, the day-to-day burden of diabetes management, and failure to achieve optimal glycaemic control. A recent international study showed that the frequency of DKA at diabetes diagnosis was between 20.7% and 48.7% in the years 2006–2019, with a marked increase during the COVID-19 pandemic, when it exceeded 55% of cases. DKA is a clinical emergency associated with serious complications, including cerebral edema, increased mortality rates, prolonged hospital stays, excessive costs, and poor long-term metabolic control. A single episode of moderate or severe DKA in young children at diagnosis is sufficient to cause cognitive impairment and impaired brain growth. The presence of DKA suggests that symptoms were delayed or unrecognized by parents or caregivers. DKA awareness campaigns are effective in reducing the frequency of DKA at the clinical onset of type 1 diabetes in children and adolescents; however, large-scale implementation of prevention campaigns requires considerable effort, and their diffusion is still limited today. Very promising results on the reduction of DKA at diabetes diagnosis have been reported by general population screening programs using anti-beta-cell autoantibodies. Hospitalization rates in children with diabetes are at least three times higher than in the general paediatric population, regardless of the presence of DKA at diabetes diagnosis, which is known to increase the likelihood of hospitalization. Additionally, DKA can lead to hospitalizations after a diagnosis of diabetes: in the first 12 months after diagnosis, more than 1 in 20 children require rehospitalization for ketoacidosis.
Diabetes management is challenging and often overwhelming for young people with type 1 diabetes and their caregivers. Young people with diabetes appear to have a greater incidence of depression, anxiety, psychological distress, and eating disorders compared with their healthy peers. Optimal treatment requires a young person with T1D and his or her family to monitor dietary intake, count carbohydrates, monitor trends in daily glucose values with a sensor or with capillary testing multiple times a day, and deliver insulin multiple times a day with a pen or a pump. Advanced technologies and treatments can also add burdens, which may cause young people to stop wearing their devices. Such treatment burdens have the potential to significantly impact quality of life. Furthermore, poor diabetes control and inadequate insulin therapy often promote diabetes-related family conflict, poor academic performance, and/or increased interpersonal conflict. Advances in insulin therapy, including the development of next-generation insulin analogues, targeted delivery approaches, continuous glucose monitoring (CGM), and automated insulin delivery systems, have contributed to improvements in glucose control. However, for most people with type 1 diabetes, long-term glycaemic control remains suboptimal. People with type 1 diabetes face and manage the acute and life-threatening threat of hypoglycemia on a daily basis. A few months after disease onset, counterregulatory mechanisms, including the physiological decrease in insulin level and the glucagon response, are lost. Over time, people with diabetes may also experience a decrease or complete disappearance of hypoglycaemia symptoms. Preventing hypoglycemia is difficult and requires overcoming emotional barriers (e.g., fear of gaining weight), educational barriers (e.g., choice of treatment), action planning (having rescue treatment at hand), and social factors (fear of attracting unwanted attention).
While the new drugs and technologies available today reduce hypoglycaemia, there remains a strong need for treatments that lower blood glucose to the desired target without causing hypoglycemia or weight gain. On the other hand, fear of hypoglycaemia remains one of the main factors limiting the achievement of optimal glycaemic control.

The onset of clinical type 1 diabetes is preceded by a long non-symptomatic prodromal period characterized by well-defined stages, which allow the progression towards symptomatic disease, defined as stage 3, to be predicted. In stage 1, individuals have two or more beta-cell autoantibodies with normoglycemia; in stage 2, they have two or more autoantibodies with dysglycemia or glucose intolerance. Given the reduced β-cell mass at the time of diagnosis, the ability to stage type 1 diabetes before clinical onset presents an opportunity to preserve functional residual β-cell mass and prevent the onset of clinical symptoms. The islet-specific autoantibodies are antibodies against insulin (IAA), glutamate decarboxylase (GAD), islet antigen 2 (IA-2), and the islet-specific zinc transporter (ZnT8). Children with two or more islet autoantibodies in stage 1 have a 5-year risk of clinical T1D of 44% and a 15-year risk of 80–90%; children with two or more islet autoantibodies in stage 2 have a 5-year risk of clinical T1D of 75% and a lifetime risk of 100%. A child with only one islet autoantibody should also be followed up, since the autoantibody could be transient or the child could develop additional autoantibodies and subsequently clinical T1D. Antibody screening has been used extensively in first-degree relatives of patients with type 1 diabetes (siblings, children, parents), including in the TrialNet study, which identified potential subjects for prevention studies and provided information on the natural history of the disease.
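The staging criteria described above amount to a simple classification rule. The sketch below is illustrative only; the function name and its return labels are our own, and it is not a clinical tool:

```python
def t1d_stage(n_autoantibodies: int, dysglycemia: bool, symptomatic: bool) -> str:
    """Classify presymptomatic T1D stage from the criteria in the text.

    Stage 1: >= 2 islet autoantibodies with normoglycemia.
    Stage 2: >= 2 islet autoantibodies with dysglycemia/glucose intolerance.
    Stage 3: symptomatic clinical disease.
    """
    if symptomatic:
        return "stage 3 (clinical T1D)"
    if n_autoantibodies >= 2:
        return "stage 2" if dysglycemia else "stage 1"
    if n_autoantibodies == 1:
        return "single autoantibody: follow up (may be transient)"
    return "autoantibody negative"

print(t1d_stage(2, dysglycemia=False, symptomatic=False))  # stage 1
```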
However, nearly 90% of children with newly diagnosed T1D have no family history of type 1 diabetes, so screening only relatives misses most cases. There are many reasons to suggest population screening in Europe and Italy. First, a large number of children have DKA at the diagnosis of type 1 diabetes, and this number increased dramatically during the COVID-19 pandemic, both in Italy and in the rest of the world. DKA is a serious and life-threatening event associated with short- and long-term sequelae, including significant neurocognitive outcomes, a shorter remission phase, lower C-peptide reserve, worse glycaemic control, an increased risk of vascular complications, and higher costs. Second, screening can be cost-effective. Early detection of T1D in children might reduce the risk of, or even prevent, the deterioration of metabolic function. This would eventually decrease the risk of long-term complications, including brain damage associated with hyperglycaemia and hypoglycaemia as well as vascular complications. In fact, an analysis of two Swedish databases, the Swedish Paediatric Diabetes Quality Registry (SWEDIABKIDS) and the Swedish National Diabetes Registry (NDR), found that patients with better metabolic control at the time of stage 3 clinical T1D diagnosis had better metabolic control later in adult life. Other studies have confirmed that lower HbA1c values at diagnosis and early preservation of the C-peptide reserve are associated with better metabolic control later in life and a reduced risk of long-term complications. In addition, early screening for T1D in children could become cost-effective thanks to cheaper antibody screening methods, prevention of DKA hospitalizations, and the expected reduction in the incidence and economic impact of diabetes complications.
In fact, the Fr1da study, which aimed to screen 200,000 children aged 3–4 years, showed that DKA prevention in about 200 patients may cover a third of the study cost. In Colorado, patients with DKA at diagnosis had an HbA1c 1.4% higher than those without DKA for up to 15 years after diagnosis. Moreover, the Autoimmunity Screening for Kids (ASK) study has demonstrated that prevention of DKA at diagnosis, combined with persistently lower HbA1c in patients without DKA and a reduced incidence of diabetes complications, makes general population screening cost-effective. Third, it should not be forgotten that early diagnosis of stage 1 or stage 2 T1D could offer children and their families an opportunity to participate in clinical trials aiming to delay the clinical manifestations of the disease. Several trials are available in Europe, the USA, and elsewhere in the world, and some drugs have shown promising results in postponing the progression to clinical T1D. In individuals with a first-degree relative with T1D, one of these drugs (teplizumab, an anti-CD3 monoclonal antibody) has been shown to prolong the diabetes-free time by up to 6 years; this drug was approved by the Food and Drug Administration (FDA) in November 2022 and should soon be available in clinical practice. The advantage of knowing in advance that a child may develop T1D, and of delaying the diagnosis with a drug, must be balanced against the anxiety that this information produces in families and against the efforts and organizational costs that screening requires (Table ). It has been argued that screening for T1D could induce considerable psychological stress in children diagnosed with pre-symptomatic T1D and in their parents (whether at-risk or general-population families). Natural history studies that have monitored children positive for islet autoantibodies have reported that parental distress was moderately increased but returned to baseline levels with appropriate education and monitoring.
Islet autoantibody screening and a diagnosis of pre-symptomatic T1D appear unlikely to induce parental psychological stress comparable to that observed in families of children diagnosed with clinical T1D. Data from the Fr1da Study have shown that, when appropriately informed and educated, parents and families of children with two or more autoantibodies had positive feelings toward early identification of T1D. There has been a shift towards a more screening-friendly position in recent years, in part because screening for multiple diseases is now possible with broad genetic testing such as exome sequencing. Recently, the European Society for Paediatric Endocrinology (ESPE) approved a Position Statement on screening for T1D in the general population, hoping that other countries would also support these aims. The Italian Society of Paediatric Endocrinology and the Italian Society of Paediatrics approved and endorsed this Position Statement in their meetings of 24 October 2022 and 18 January 2023, respectively. Health authorities would like to see a clear added value, a health benefit, and a low burden of diagnosing asymptomatic disease; effective medical care should be available that at least partly prevents or delays symptomatic disease and reduces complications; screening and monitoring need to be cost-effective and competitive with respect to other health needs and priorities; and there must be convincing evidence that the false-positive rate is low and that identification of false-positive cases does not cause relevant harm. How does screening for beta-cell autoimmunity meet these criteria?
Ongoing and future studies will provide information on the prevalence of asymptomatic beta-cell autoimmunity in the general population, on program efficacy in preventing DKA and reducing family burden, and on precise estimates of the rate of progression to symptomatic disease (i.e., the positive predictive value) or the return to autoantibody negativity (i.e., the false-positive rate) in children in the general population. Experiences from European countries and the United States, data on the added value of improved long-term care and the reduction of complications, and long-term data on the burden and social implications for families are essential. We will soon have a treatment that delays the disease (teplizumab) available in clinical practice; however, the economic benefit of such a treatment is still unknown. If the authorities and insurers accept the screening and treatment of children at risk, organizational efforts and urgent investments will be required . In any case, the voice of patients and their families must be recognized and considered in the decision-making process. In summary, screening for asymptomatic β-cell autoimmunity is possible, is effective in preventing DKA in children, and should be implemented. We do believe that European and Italian authorities should endorse this need in order to detect at-risk children early, when the disease is in stage 1 or stage 2. This could possibly preserve the beta-cell mass, allow longer insulin independence and prevent short-term and long-term complications. Below is the link to the electronic supplementary material. Supplementary Material 1: Point-by-point response |
Parent-adolescent sexual and reproductive health information communication in Ghana | ae0f2ac6-a0f1-43cd-b1ab-fa024b2c926d | 11837287 | Health Communication[mh] |
Adolescent Sexual and Reproductive Health (SRH) has received attention globally because of the consequences of risky adolescent sexual behaviour, such as early sexual debut, unprotected sex, and multiple sexual partners . Such behaviours lead to sexually transmitted infections (STIs) including HIV/AIDS, and unplanned pregnancies with their associated consequences, such as unsafe abortion, poor birth outcomes and school dropout . In lower- and middle-income countries (LMICs), adolescents have challenges in meeting their SRH needs, and risky sexual behaviours are more pronounced in such countries . Adolescents face barriers in accessing SRH information from reliable sources such as parents, who are their preferred source of information. Approximately 13% of adolescent girls and young women globally give birth before age 18 . Sub-Saharan Africa has the highest adolescent birth rates globally, with 93 births per 1,000 girls aged 15–19 years . In Ghana, the pooled prevalence of adolescent pregnancy is approximately 15.4%, with higher rates in rural areas (19.5%) compared to urban areas (10.6%) . In Ghana, there is an increased rate of adolescent pregnancies . This may be because adolescents lack SRH education, are neglected by their parents and face sexual abuse . In curbing these issues, parent-adolescent SRH information communication has been cited as one of the most important activities . Through this, adolescents gain information that helps them with sexual decision-making, which contributes to improved SRH outcomes . The ability of adolescents to make decisions based on accurate SRH information is very important. SRH information needs to be not only accessible, but also comprehensible to the adolescent . 
Research has shown that when adolescents acquire knowledge from SRH information communication, they may avoid or delay sexual activities and establish healthy sexual behaviour in the future . In phase one of this study, which was the systematic review, the SRH information identified included (but was not limited to) changes during adolescence, menstruation, sex, abstinence, dating, relationships, pregnancy, safer sex, contraceptives, STIs including HIV/AIDS, abortion, and sexual coercion resistance . Adolescents’ acceptance of SRH information also depends on its source. In the second phase of this study, adolescents mentioned that they prefer receiving SRH information from their parents. This is also in line with the findings of other studies . Parents play an influential role in educating adolescents on SRH; therefore, it is important to look at parent-adolescent SRH information communication. Parent-adolescent SRH information communication is how parents share information, influenced by their values, beliefs, and standards regarding SRH, with adolescents. This could influence the knowledge, attitudes and behaviour of adolescents regarding SRH . SRH information communication between parents and adolescents is influenced by the communication skills possessed by both parents and adolescents . Parents as well as adolescents must have some communication skills to be able to share SRH information. These skill needs may be met when parents and adolescents have the knowledge and skill to communicate SRH information using an SRH information communication intervention that is culturally sensitive. A culturally sensitive SRH information communication intervention is one that intentionally takes into consideration the parents’ and adolescents’ values, beliefs and standards to increase the SRH communication skills of both parents and adolescents . 
Such an intervention is likely to be accepted by people in the Ghanaian context since it takes into account their cultural predisposition. Considering the above, it is important to integrate the findings from the systematic review and the qualitative studies to inform the adaptation or adoption of a culturally sensitive SRH information communication intervention in Ghana. The Information, Motivation, Behavioural Skills (IMB) model was used to organize the findings from the systematic review and the qualitative studies for analysis. Research design This is the fourth of a series of articles on an explanatory sequential mixed methods study on parent-adolescent SRH information communication intervention in Ghana. The first phase was a systematic review of effective interventions for SRH information communication utilising JBI SUMARI and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). Further, an exploratory design was used to explain the findings from the systematic review. The integration of findings was elucidated through merging, connecting and building of meta-inferences. The connecting scheme for data integration as proposed by was used because data from the qualitative phase was used to explain the systematic review results. The mixed data analysis proposed by Onwuegbuzie and Teddlie was used to combine the data. First, integrated data reduction was conducted. The statistics from the individual studies identified from the systematic review were computed together with the themes generated from the qualitative studies. The researcher then did a parallel graphical representation of quantitative and qualitative data (tables for quantitative parameters and thematic matrices for qualitative parameters) to visually compare the data. Cross-case analysis was done to ensure quantification and qualification. Qualitative and quantitative data were then correlated. Multiple data sets were then merged to create new codes and variables. 
Finally, meta-inferences were derived by reviewing data from both quantitative and qualitative data sets. The steps taken have been summarised in Fig. below. Study setting The study began with a systematic review of sexual and reproductive health (SRH) information communication interventions in low- and middle-income countries (LMICs). Following this, a qualitative descriptive phase was conducted in the Asante Akyem North Municipality of Ghana, an LMIC, to provide localized insights into SRH communication practices. Asante Akim North municipality has a large population of adolescents which is diverse in culture and religion. Such a setting provides an opportunity to gather in-depth data to accurately adapt an intervention for parent-adolescent information communication on SRH that is context specific. The municipality is located in the eastern part of the Ashanti region. About 98.2% of the population in the municipality are Ghanaians (95.9% by birth, 0.6% by naturalization and 1.7% with dual nationality) and 0.9% are from other Economic Community of West African States (ECOWAS) countries. With regard to religion, the majority (83.7%) of the persons in the district are Christians, 9% are Muslims and 5.6% do not associate with any religion. This is important to note because religion has been linked with communication and decision-making in health . The main languages for communication in the municipality are English and ‘Twi’. Other languages spoken are based on ethnicity. Language is not just considered part of culture but helps in the transmission of culture . The majority (79.2%) are literate. There are 16,038 adolescents in the district . Description of quantitative phase The quantitative phase was a systematic review of quantitative studies, guided by the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and the Joanna Briggs Institute (JBI) manual. 
The review was registered in PROSPERO with registration number CRD42022297526. Information Sources and Search Strategy A search strategy was developed to identify studies published from January 2011 to December 2021 in EMBASE, CINAHL, PubMed, OVID, Scopus, Cochrane Reviews Library, Web of Science and Science Direct. The keywords used for the search included (“Adolescents” OR “Teenagers” OR “Young Adults”) AND (“Parents” OR “Caregiver” OR “Mother” OR “Father” OR “Guardian”) AND (“Sexual” AND “Reproductive” AND “Health”) AND (“Information” AND “Communication”) AND (“Interventions” OR “Strategies” OR “Best Practices”). Studies that trained parents, adolescents (13 to 16 years of age) or both on SRH information communication were selected. The authors considered both experimental and quasi-experimental designs, including randomized controlled trials, in LMICs. Study Selection After the search in the above-mentioned databases, the citations identified were uploaded into Mendeley (1.19.8) and screened by two reviewers. Potentially relevant studies were imported into the JBI system for the unified management, assessment, and review of information (JBI SUMARI). The inclusion criteria were used by the researcher and his supervisor to assess the full text of the studies, and the results were reported. Inclusion and Exclusion Criteria This review included both experimental and quasi-experimental study designs, including randomized controlled trials in LMICs, that exposed parents, adolescents (13 to 16 years) or both to SRH communication interventions. This age group comprises adolescents in the latter stage of early adolescence and those in middle adolescence, and represents a transition from early to late adolescence. Since some studies did not specifically include this age group, relevant studies were included if at least 50% of the participants were between the ages of 13 and 16; results were stratified according to age groups. 
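The Boolean structure of the search string quoted above can be sketched programmatically. This is a hedged illustration, not part of the review protocol: the helper name `group` is hypothetical, and the sketch simply reproduces the documented keyword groups, OR-ing synonyms within a group and AND-ing the concept groups together.

```python
def group(terms, op):
    """Quote each term and join the group with the given Boolean operator."""
    return "(" + f" {op} ".join(f'"{t}"' for t in terms) + ")"

# Concept groups as listed in the search strategy above.
query = " AND ".join([
    group(["Adolescents", "Teenagers", "Young Adults"], "OR"),
    group(["Parents", "Caregiver", "Mother", "Father", "Guardian"], "OR"),
    group(["Sexual", "Reproductive", "Health"], "AND"),
    group(["Information", "Communication"], "AND"),
    group(["Interventions", "Strategies", "Best Practices"], "OR"),
])
print(query)
```

Building the string from the grouped terms makes it straightforward to adapt the same strategy to databases with slightly different Boolean syntax.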
Assessment of Methodological Quality / Critical Appraisal Included studies were critically appraised independently by the researcher and his supervisor, using the JBI standardized critical appraisal tools . Data Extraction and Synthesis Standardized data such as authors, study aims, participants, and settings were extracted from each study, as well as information regarding the intervention components and outcomes. A narrative synthesis approach was used to synthesize data because of the heterogeneous nature of the study outcomes and the intervention approaches. Description of the qualitative phase Study Population and Sampling In the qualitative phase, parent-adolescent dyads were recruited but interviewed separately. Parents included a biological father or mother, or the male or female guardian of an adolescent (13 to 16 years of age), who were willing to participate in the study. Adolescents aged 13 to 16 years were included in the study. A purposive sampling approach was used to sample parents and adolescents. Flyers with information were distributed to potential participants. Parental consent and child assent forms were signed by parents and adolescents respectively prior to inclusion in the study. To include the nuances of the developmental needs of adolescents, the authors included 5 parents of adolescents aged 13 to 14 years and 5 parents whose adolescents were 15 to 16 years of age. Maximum variability was ensured by sampling at least three male and three female parents. Adolescents who were cohabiting or married were excluded from the study, as were their parents. The final sample comprised 10 parents and 10 adolescents, based on saturation. Data saturation was achieved after the 8th parent-adolescent pair, as no new themes or codes emerged during the iterative process of data analysis. To validate this, two additional interviews were conducted, which further confirmed the recurring themes. 
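The stopping rule described above (saturation at the interview after which no new codes emerge, confirmed by two further interviews) can be expressed as a small check. This is a hedged sketch with illustrative code sets, not the authors' actual procedure:

```python
def saturation_point(codes_per_interview, confirm=2):
    """Return the 1-based index of the last interview that introduced a new
    code, provided at least `confirm` later interviews added nothing new;
    otherwise return None (saturation not yet demonstrated)."""
    seen, last_new = set(), 0
    for i, codes in enumerate(codes_per_interview, start=1):
        if set(codes) - seen:          # any code not seen before?
            last_new = i
            seen |= set(codes)
    if last_new and len(codes_per_interview) - last_new >= confirm:
        return last_new
    return None

# Ten interviews with hypothetical code labels: the 8th is the last to add a
# new code, and interviews 9 and 10 confirm saturation, mirroring the sample
# size rationale described above.
interviews = [{"a"}, {"a", "b"}, {"b"}, {"c"}, {"a"}, {"b", "c"},
              {"c"}, {"d"}, {"a", "d"}, {"b"}]
print(saturation_point(interviews))
```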
This approach aligns with established qualitative research guidelines that define saturation as the point at which additional data does not contribute new information or insights . Data Collection In the qualitative phase, individual interviews were conducted using a semi-structured interview guide. The interview guide was developed by the researcher in consultation with the supervisory team, based on the findings of the systematic review and the IMB skills model, to obtain data from parents and adolescents. The guide comprised open-ended questions with probes and prompts to ensure that spontaneous, rich and vivid data were obtained from the context. A pilot interview was initially conducted with a parent and her adolescent separately to assess the clarity of the guide and the time estimated for the interviews. The data was transcribed and analysed, and afterwards discussed with the supervisors. The interview guide was slightly modified after the pilot study, to ensure that the SRH information communication skill needs would be well explored. Consent was obtained from parents and child assent was obtained from adolescents. After that, interviews were conducted at a place and time convenient to the participants. The researcher scheduled a convenient date, time and venue suitable for participants. Since parents preferred interviews to be conducted in their home environments, both parent and adolescent interviews took place in the same environment, although they were interviewed separately. Face-to-face, semi-structured interviews were conducted by the researcher and the assistants, who have experience in conducting qualitative interviews. The researcher sought permission from the participants and audio-recorded the interviews. Although parent-adolescent dyads were recruited, interviews were held separately to avoid the potential for power dynamics to affect the quality of data obtained from the adolescents. 
Parents were not involved and were not present when adolescents were interviewed. Only the researcher, his assistants and the participant were present during the interviews. The interviews lasted for 35 to 60 min. Data collection for the second phase of the study took place between August 2022 and January 2023. Data Analysis and Synthesis In the qualitative study, data was analysed inductively following Braun and Clarke’s approach to thematic analysis . The doctoral student listened to the recorded interviews to familiarise himself with the data before and after transcription. Data was transcribed verbatim in Asante Twi or English. The interviews conducted in Asante Twi were translated into English after transcription, before being retranslated into Asante Twi and confirmed, to ensure that participants’ voices were duly represented. After this, the doctoral student compared the transcripts with the recorded interviews to check the accuracy of transcription. While participant verification of the English translations was not feasible in this instance, the multi-step translation and verification process was designed to uphold the integrity and accuracy of the data. The transcribed data was then uploaded into Atlas.ti software, version 23.0.7, to be organized into meaningful units and generate initial codes. The codes were put together to generate themes and subthemes, and these were refined to avoid overlap between the themes. Conclusions were drawn from the identified categories and themes to align with the objectives. The themes were defined and named, and then the report was produced. Mixed method integration The aim of the mixed method integration was to describe how the qualitative findings explained the quantitative findings of the systematic review. The independent intramethod approach, where separate analyses are done for the quantitative and qualitative studies, was employed in the mixed analysis. 
Inferences were drawn from each analysis and the findings were compared for interpretations to be made. A mixed data collection inventory was first taken which included as many data points as possible. Interpretive analysis was then done to interpret the data patterns. Back-and-forth exchange was used to identify data linkages. Joint displays were then used to organize linked data. The findings were examined for complementarity and divergence. The authors returned to theory to find explanations and examined intramethod findings for possible biases to handle divergence. Meta-inferences were then generated. Summary of quantitative results Following a thorough search of databases, 1,706 studies were retrieved. Thirteen studies were retained following title and abstract screening. After full-text screening, five studies were included; narrative synthesis was used because of the heterogeneous nature of the studies. Four studies were RCTs and one used a quasi-experimental design. Two of the included studies were from Iran and one each from Tanzania, South Africa, and Uganda. On average, the studies focused on adolescents between 13 and 16 years of age. One of the studies delivered the intervention to both parents and their adolescents. In the other four studies, interventions were delivered only to parents. The methods of intervention delivery included role plays, lectures, group discussions, posters, games, and take-home assignments. Intervention delivery was done by experts such as an SRH education and adolescent counsellor, a consultant midwifery student with a certificate in sexual training of children and adolescents, teachers, and HIV peer educators. The various interventions were delivered in health centers, community, worksite, and school settings. Some of the studies had components other than SRH communication, such as normal sexual development and condom use behaviour. 
SRH communication was found to be influenced by SRH information, motivation or attitudes towards SRH information communication, and SRH communication skills. Summary of qualitative findings Ten parent-adolescent pairs were selected to participate in the study. Four of the parents were male and six were female. Nine of the participants were married and the other was divorced. Two of them had no formal education, whilst the rest had received formal education, with one ending at the primary level and the rest at the tertiary level. Eight of the participants were Christians and two were Muslims. Five themes emerged from the study. In the first theme, the SRH topics discussed by parents, as well as the sources of information for the discussion, were labelled SRH information communicated . The second theme related to the elements that either encouraged or discouraged parents from discussing SRH issues with their adolescents. This was labelled individual parent and adolescent factors . The third theme was about the perception of the parents regarding support from significant others and the behaviour of the community towards SRH communication with adolescents. This was labelled contextual factors influencing SRH information communication . The fourth theme was about how parents share SRH information with their adolescents. This was labelled SRH communication skill needs of parents . The last theme was about how parents would want an SRH intervention to be packaged to meet their needs, considering the method of delivery, the experts to deliver it and the venue for delivery; this was labelled Context specific Information Communication Intervention . In the second qualitative phase, 10 adolescents of the parents interviewed in the first qualitative phase were selected. Their ages ranged from 13 to 16 years, based on the Systematic Review participant population. There were six females and four males. 
Seven of the participants were at the Junior High School level and the other three were at the Senior High School level. Eight of the participants were Christians and two were Muslims. Five themes emerged from the study. The adolescent and parent factors that influenced SRH communication between adolescents and their parents were labelled adolescent and parent concerns that influence SRH communication skills. Adolescents' perception of support from significant others, and the cultural norms that influence their SRH communication with their parents, were labelled sociocultural issues influencing communication skills. The process by which adolescents share SRH information with their parents was labelled SRH information communication that influences communication skills. The skills needed by adolescents for communicating SRH information with their parents were labelled SRH information communication skill needs. The last theme concerned how the adolescents would want an SRH intervention to be packaged to meet their needs, considering the method of delivery, the experts to deliver it, and the venue for delivery; this was labelled Context specific Information Communication Intervention.

Mixed method findings

The findings from the systematic review and the qualitative study revealed seven areas of focus: method of intervention delivery, experts involved in the delivery of the intervention, venue or place of intervention delivery, SRH information, motivation, SRH information skills, and SRH information communication (Fig. ).

Regarding the method of delivery, the qualitative findings explained the quantitative findings of the systematic review, in that the methods that emerged in the qualitative data were methods that had been used in the studies identified in the systematic review. Various methods were used across the identified interventions.
The methods included lectures, games, group discussions, role plays and brainstorming; a workshop, classroom teaching (which included role plays, debates and writing exercises), and homework assignments; lectures, posters, group discussions, exercises, role plays and the creation of scenarios; a workshop making use of group sessions; and group counseling. Parents in the qualitative study mentioned that they would prefer a workshop with lectures, role plays, group discussions and brainstorming (Table ). Adolescents preferred classroom teaching with homework and games (Table ). Illustrative quotes are given in Table . It is likely that the delivery method employed by Seif and colleagues would be accepted in the Ghanaian context.

Regarding the experts used to deliver the interventions (Table ), the qualitative findings explained the quantitative findings of the systematic review, in that the experts who emerged in the qualitative data had backgrounds similar to those used in the studies identified in the systematic review. Teachers; an expert in sexuality health education and adolescent counsellor; peer HIV educators and clinical psychologists; and a consultant midwifery student were found in the systematic review (Table ). In the qualitative study, nurses and midwives also emerged as preferred experts (Table ). Parents only wanted their children to be taught by someone who holds their culture in high esteem, so that adolescents would receive culturally appropriate SRH information.

In respect of the setting for the study (Table ), the qualitative findings explained the findings of the systematic review. The settings in the systematic review were schools; a worksite; the community; and a health centre. In the qualitative study, it emerged that parents would want to receive the training at a community centre, whereas adolescents preferred their school environment. Illustrative quotes are given in Table .
With respect to SRH information, only two studies measured the SRH information communicated by parents and adolescents. Various SRH information had been communicated by parents and adolescents, as identified across the studies; this covered a range of SRH topics. However, it was noted that in some studies few topics were discussed, while in others most topics had been discussed. No single study had discussed all SRH topics. This has been presented in detail in article 1. In the qualitative study, parents and adolescents had discussed pregnancy, STIs, the prevention of pregnancy and STIs, abortion, abstinence, sex, changes in adolescence, and personal hygiene. Contraceptives were not really discussed by the majority of parents and adolescents: most parents mentioned that they would not talk about them, some because of the age of the adolescents and others because it is not culturally appropriate. Information sources for parents included relatives, books, church, mosque, the media, and personal experiences, whilst those for adolescents included parents, books, school, relatives, peers, church, mosque, and the media. Illustrative quotes are given in Table .

On account of what motivated parents and adolescents to communicate, the qualitative findings explained the quantitative findings. This was measured by Seif and colleagues in their studies. Personal and social motivation emerged in the qualitative study as factors that influence SRH information communication skills and the search for SRH information (Table ).

Regarding SRH information communication skills, the qualitative findings explained the quantitative findings. Those who were trained in the various interventions identified in the systematic review had improved SRH information communication skills compared with those in the control groups.
In the qualitative study, where no intervention had been delivered, it was found that parents and adolescents lacked SRH information communication skills (Table ). This underlines the need for a culturally sensitive SRH information communication intervention to train parents and adolescents in SRH information communication. Lastly, SRH information communication was also identified as a theme (Table ). Here too, the qualitative findings explained the quantitative findings from the systematic review. After the various interventions, the frequency of communication improved. In the qualitative study, it emerged that parents and adolescents do not frequently communicate SRH information. It is believed that an intervention could assist in training them to develop skills that will translate into frequent SRH information communication.
It was identified that the success of SRH information communication is determined by the skills possessed by parents and their adolescents. For communication to take place successfully, culturally appropriate information that takes cultural values into account as a motivation is likely to improve the attitudes of parents and adolescents towards SRH information communication. A culturally sensitive SRH information communication intervention must consider the method of delivery, the place of delivery, and the experts involved in the delivery. Making use of classroom teaching in a school for adolescents, and workshops in the community for parents, may be culturally appropriate and explains some of the methods that were used in the various interventions identified in the quantitative phase.
Health professionals, especially nurses and midwives, who value the cultural norms may be contextually appropriate, which explains the findings from the quantitative phase. Parents would want to know what their adolescents are being taught and would prefer to be present.

This study aimed to systematically integrate data from a systematic review of effective SRH information communication interventions in LMICs with a qualitative study in Ghana, in order to identify the components to consider when adapting an SRH information communication intervention to the Ghanaian setting. The study highlights important components of SRH information communication interventions, and the culturally sensitive elements that can be factored into them, so as to produce a contextually appropriate intervention for Ghanaian parents and adolescents. This is necessary because interventions are more likely to be accepted if their values, beliefs and behavioural sensitivity match those of the user. Some barriers, such as social norms and a lack of skilled health professionals, have been identified as hindrances to the acceptability of SRH interventions. This being the case, paying attention to the voices and preferences of participants will help to adapt an intervention so that it is culturally sensitive and likely to be accepted by parents and adolescents in the Ghanaian context. Healthcare professionals such as nurses and midwives are likely to deliver the interventions effectively. This finding of the mixed method integration has been implemented in many SRH interventions. Some interventions have also used a multidisciplinary health team for delivery. What is generally known is that intervention studies that involve adolescents make use of those who directly care for adolescents or experts in adolescent health.
Thus, healthcare workers who have been trained to care specifically for adolescents, especially with regard to SRH, will be beneficial and contextually appropriate for the intervention delivery. The concern of parents was that the healthcare professionals delivering the intervention should be familiar with their cultural values, so that these are not compromised. Another relevant finding of the mixed method integration concerns the setting for intervention delivery, that is, the venue where the intervention takes place. It emerged that a school-based or community setting is preferred for adolescents, and a community setting for parents. Most SRH interventions for adolescents have used school-based settings with a group format, and this has been effective. For instance, Barron, and Punamäki and colleagues, used school-based settings with a group format, and these were effective. Adolescents may feel comfortable in schools because they are used to the environment and might already have been having group discussions there. Using what they are interested in may support the acquisition of the skills needed to communicate SRH information. A community-based approach for parents and/or adolescents has been effective in some interventions. Regarding the method of delivery, it was found that parents and adolescents preferred role play, lectures, group discussions and teaching, which were also used in the various interventions identified in the systematic review. Different delivery modalities, such as lectures, have been employed in various studies to deliver SRH programmes for adolescents, for example in China, where attitudes to adolescent SRH are conservative. These methods may work effectively in the delivery of an SRH information communication intervention, since they have been used in similar interventions in LMICs, as identified in the systematic review.
The SRH information that is usually communicated by parents and adolescents is influenced by social norms. In the Ghanaian context, as identified from the qualitative data, the SRH information most commonly communicated by parents and their adolescents has included menstruation, STIs, pregnancy, abstinence, changes in adolescence, sex and sexual relationships. This supports earlier studies in Ghana and explains the data gathered from the systematic review in phase one. SRH is a sensitive area, especially regarding adolescents, and parents always want to preserve their cultural values. For instance, in the implementation of the DARAJA curriculum in Zanzibar, parents opposed the discussion of condom use and other contraceptive methods with adolescents, and consequently that part was omitted before implementation. In South Africa, while stigma persists, interventions targeting parents to facilitate open discussions have shown promise, demonstrating that culturally sensitive approaches can mitigate these barriers. Studies in India, such as those by Mehra and colleagues, show comparable challenges, with family members avoiding conversations about contraception and sexual health due to traditional beliefs about purity and morality. This suggests that what may be accepted in the Ghanaian context is what has been mentioned by participants in the qualitative data from Ghana; this must therefore be taken into consideration in the adaptation of an SRH information communication intervention for parents and adolescents in Ghana. This is because the effectiveness of public health interventions is usually associated with the social and cultural context in which the implementation takes place: an intervention that is effective in one setting may not be effective in a new one. The varying effectiveness of SRH communication interventions across contexts can be attributed to a range of contextual factors.
Cultural norms play a pivotal role: in settings where SRH discussions are culturally taboo, interventions often face resistance, as seen in regions with conservative social norms. Conversely, interventions in culturally open societies tend to achieve greater success. Caregiver comfort levels are another critical factor; interventions that include caregiver training to build confidence and communication skills are more effective, whereas caregivers who feel unprepared, or who perceive SRH discussions as promoting inappropriate behaviours, may limit the success of such interventions. Adolescent receptiveness also significantly affects outcomes. Adolescents who view their caregivers as approachable and nonjudgmental are more likely to engage in meaningful conversations, and interventions in such contexts have shown higher effectiveness. In contrast, authoritarian or dismissive parental attitudes can reduce intervention impact, highlighting the importance of relational dynamics in SRH communication. The findings also suggest that parents and adolescents need skills to communicate SRH information, which supports an earlier study in Ghana. A lack of the required skills can be a barrier to SRH information communication, highlighting the need for an intervention to train parents and adolescents to develop skills for SRH information communication. Appropriate delivery methods are believed to contribute to SRH information communication skill development. Evidence from the literature suggests that the openness and frequency of SRH information communication may increase when parents and adolescents possess the skills for communicating such information. The qualitative phase provided contextually appropriate data that will inform the adaptation of a culturally sensitive SRH information communication intervention.
The integration of the results of the systematic review of SRH information communication interventions with the qualitative data, both of which made use of rigorous methods, increases the value of the findings. Only studies reported in English were considered in the systematic review; studies in other languages were therefore excluded. The age of adolescents was limited to 13 to 16 years, which excludes the needs of adolescents outside this range and their parents, as well as interventions where the mean age was lower or higher than the stated range.

This study used an explanatory sequential mixed method approach to systematically integrate data from a systematic review of effective SRH information communication interventions in LMICs with a qualitative study in Ghana. The qualitative data explained the quantitative data from the systematic review and provide the basis for the adaptation of a culturally sensitive SRH information communication intervention in Ghana. The study reports on the methods of delivery, the use of experts in delivery, and the delivery settings that are culturally appropriate. In all, parents and adolescents have the desire to communicate SRH information; however, they want to share information that preserves their cultural values, and both parents and adolescents lacked the skills to share this information. A culturally sensitive SRH information communication intervention may therefore help to improve the communication skills of parents and adolescents. The development of localized SRH education curricula that involve community stakeholders, including parents, traditional leaders, and educators, is therefore recommended. These curricula would emphasize culturally appropriate ways to discuss sensitive topics with adolescents. Policies should also be formulated that institutionalize parent-focused SRH workshops as part of community health initiatives.
Such workshops can equip parents with the skills and confidence to navigate SRH discussions within their cultural contexts. |
Using the Correlation Intensity Index to Build a Model of Cardiotoxicity of Piperidine Derivatives | 0ea70de6-0f93-4865-a40d-daa9a686e201 | 10535953 | Pharmacology[mh] | The risk of developing cardiotoxicity against the background of treating carcinogenic pathologies is one of the most urgent problems of modern oncology and cardiology. Piperidine derivatives are of exceptional interest due to their potential biological activities, such as antiviral, antibacterial, antitumor, and many others. Current anticancer therapy includes many drugs with various mechanisms and spectra of action. One of the most important groups among these drugs is the antibiotics with antitumor activity, which play a major role in the chemotherapy of various oncological diseases. Their effectiveness has been clinically proven. However, despite the favorable course of the disease when using this group of drugs, patients experience a number of undesirable side effects in various organs and systems that can develop not only during therapy but also after its completion. One of the main side effects is cardiotoxicity. This term covers various adverse events in the cardiovascular system arising against the background of drug therapy for oncological diseases. Such manifestations of cardiotoxicity as cardiac pain, blood pressure changes, heart rhythm disturbances, myocarditis, pericarditis, and heart attacks reduce a patient's quality of life, and sometimes they become serious reasons for discontinuing or not prescribing the drug. For some medicines in this group, for example, alkylating agents, cardiotoxicity is a limiting factor. Antibiotics with antitumor activity currently occupy a leading place in the treatment of oncological diseases; as a result, the correction of their side effects, particularly cardiotoxicity, remains one of the most urgent problems for oncologists, cardiologists, and general practitioners.
The frequency of development of various cardiac dysfunctions reaches high values, and both reversible and irreversible consequences are quite dangerous. Prevention and treatment of cardiotoxicity remain mandatory but complicated clinical tasks for a doctor, owing to the irreversibility and progressive nature of most disease-related changes in the functioning of the cardiovascular system. Cardiotoxic complications significantly impair patients' quality of life and reduce life expectancy, and mortality from cardiovascular diseases still ranks first globally. In addition, psychopharmacology and psychopharmacotherapy of depressive states are dynamically developing areas, and antidepressants are the second most prescribed drugs among all psychotropic drugs. Such a high rating of these psychotropic drugs is explained by the fact that about 5% of the world's population suffers from depression (according to the WHO). However, high doses and long-term use of medications in this group lead to cardiotoxic effects. The cardiotoxicity of tricyclic antidepressants manifests as conduction disturbances in the atrioventricular node and ventricles of the heart (a quinine-like action), arrhythmias, and a decrease in myocardial contractility. Doxepin and amoxapine have the least cardiotoxicity. Treatment of patients with cardiovascular pathology with tricyclic antidepressants should be carefully monitored, and high doses should not be used. Cardiovascular diseases such as coronary artery disease, valvular heart disease, arrhythmias, and heart failure are serious health risks and often require lifelong treatment. Much attention is paid to the psychological consequences of living with cardiac disease. Anxiety symptoms are common among patients with cardiovascular diseases and may worsen the prognosis for these patients.
Symptoms of anxiety and depression can prevent lifestyle changes and adherence to therapy, as well as reduce the effectiveness of cardiac rehabilitation. There is increasing evidence of the widespread use of psychotropic drugs in cardiac patients for comorbid psychiatric disorders. The side effects of psychotropic medications on the cardiovascular system include disturbances in the rhythm and conduction of the heart. For example, recent studies have shown that antidepressants were associated with increased mortality and an altered beta-blocker effect in patients with heart failure. In addition, the use of antipsychotic drugs is sometimes necessary in patients after acute myocardial infarction. The increase in morbidity and mortality under the influence of depression in patients with cardiovascular diseases dictates the imperative need for a preliminary analysis of both the therapeutic efficacy and the cardiotoxic potential of drugs. In other words, when developing a treatment plan for a depressed patient with heart disease, one should carefully weigh the risk/benefit ratio of any intervention. However, the choice of antidepressants is complicated because many of them can cause cardiovascular side effects, such as orthostatic hypotension, hypertension, and impaired cardiac conduction. In addition, clinically significant drug interactions should be considered when choosing a treatment. Unfortunately, the number of clinical studies explicitly investigating the safety of antidepressants in patients with cardiovascular disease is limited, and the studies that have been conducted have generally included a small number of patients. The human ether-a-go-go-related gene (hERG) potassium channel plays a pivotal role in cardiac rhythm regulation, and the cardiotoxicity data associated with hERG inhibition by drugs and environmental chemicals provide important information for medicinal chemistry.
As stated above, cardiac problems are among the most complex in medicine , and there is a clear trend toward their growing importance . Identifying potential hERG potassium channel blockers is an important part of drug discovery and drug-safety assessment in pharmaceutical industries and academic drug discovery centers . A currently popular strategy is to begin such searches by choosing an idealization, that is, a molecule that embodies the preferred qualities in complete form. In order to prioritize molecules during the early drug discovery phase and to reduce the need for additional preliminary screening of pharmaceutical agents, computational approaches have been developed to predict the hERG-blocking potential of new drug candidates. In other words, estimating the cardiac toxicity of organic hERG blockers is an important theoretical and practical task of medicinal chemistry. Potential hERG inhibitors must be identified for drug discovery and safety ; however, the experimental analysis of all potential hERG inhibitors is impossible because there are so many of them. Computational models for the cardiac toxicity of organic hERG blockers are an attractive alternative to real experiments . Quantitative structure–activity relationships (QSAR) are common computational approaches . Such models can be obtained using machine learning based on graph theory, support vector machines, random forests, artificial neural networks, and other approaches . The CORAL software ( http://www.insilico.eu/coral , accessed on 1 September 2023) is a tool for building QSAR models for various endpoints with the Monte Carlo method . The CORAL software was recently updated with the so-called index of ideality of correlation ( IIC ) and the correlation intensity index ( CII ) . IIC and CII are indicators of the predictive potential of QSAR models.
IIC differs from other criteria of the statistical quality of linear regression models in that it is sensitive both to the value of the correlation coefficient and to the value of the mean absolute error (MAE). In principle, CII has some analogy with the known cross-validation measures, but the analogy is only partial: whereas the traditional cross-validated test averages the difference between the correlation coefficients before and after removing molecules from a set (training, calibration, or testing), the CII accumulates only the differences produced by removing molecules that reduce the correlation coefficient of the set. Here, the ability of the IIC and CII to improve the predictive potential of cardiotoxicity models is studied.
2.1. QSAR Models Based on TF 1
The Monte Carlo optimization with the target function TF 1 for three random splits (#1, #2, and #3) provides the following models:
pIC 50 = 3.456(±0.058) + 0.0632(±0.0019) × DCW(1, 15) (1)
pIC 50 = 4.082(±0.019) + 0.1498(±0.0025) × DCW(1, 15) (2)
pIC 50 = 2.618(±0.140) + 0.1654(±0.0090) × DCW(1, 35) (3)
The statistical quality of these models is summarized in the corresponding table.
2.2. QSAR Models Based on TF 2
The Monte Carlo optimization with the target function TF 2 for three random splits (#1, #2, and #3) provides the following models:
pIC 50 = 3.202(±0.068) + 0.0867(±0.0027) × DCW(1, 15) (4)
pIC 50 = 3.036(±0.084) + 0.0730(±0.0028) × DCW(1, 15) (5)
pIC 50 = 3.490(±0.077) + 0.0803(±0.0035) × DCW(1, 15) (6)
The statistical quality of these models is likewise tabulated. The average value of the coefficient of determination for these models is about 0.6 (for the set as a whole). However, there is a paradox described earlier : the influence of the IICc improves the coefficient of determination for the calibration and validation sets, but at the expense of the training sets, where the coefficient of determination becomes lower.
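Once the optimal descriptor has been computed, applying any of the models above is a single linear evaluation. A minimal sketch, using the coefficients of Equation (1) and a hypothetical DCW value (a real DCW comes from the Monte Carlo-optimized correlation weights):

```python
# Sketch: evaluating a CORAL-style linear model pIC50 = C0 + C1 * DCW(T, N).
# Coefficients are taken from Equation (1); the DCW value is hypothetical.

def predict_pic50(dcw: float, c0: float, c1: float) -> float:
    """Predicted cardiotoxicity (pIC50) from the optimal descriptor DCW."""
    return c0 + c1 * dcw

# Model (1), split #1: pIC50 = 3.456 + 0.0632 * DCW(1, 15)
print(predict_pic50(50.0, 3.456, 0.0632))  # hypothetical DCW = 50
```

The same one-liner applies to Equations (2)-(6) with the corresponding coefficient pairs.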
Models calculated with the target functions TF 1 and TF 2 are compared in the corresponding table. Models calculated using TF 2 are preferred, since the average determination-coefficient values of the TF 2 -models are larger than those of the TF 1 -models for all three random splits. Williams plots for all considered models indicated that there are practically no outliers for either the TF 1 -models or the TF 2 -models .
The models’ advantage is their user-friendliness, since their implementation requires only SMILES and numerical data for an endpoint, without any other descriptors. There are special rules to define the mechanistic interpretation as well as the applicability domain. The described approach provides models that follow the OECD principles . The essence of that document is concentrated in the well-known five OECD principles: A defined endpoint; An unambiguous algorithm; A defined applicability domain; Appropriate measures of goodness-of-fit, robustness, and predictivity; A mechanistic interpretation, if possible. The molecular features that are statistically stable promoters of an increase or decrease of pIC 50 are listed in the corresponding table. These data are selected according to the following: (i) Molecular features extracted from SMILES or HSG with significant prevalence in the training and calibration sets; (ii) Molecular features which have positive correlation weights (CW) for all three runs of the Monte Carlo optimization; (iii) Molecular features with negative CW for all three runs of the Monte Carlo optimization. There are stable promoters of pIC 50 increase related to all distributions. For instance, the promoters of an increase of pIC 50 are the presence of nitrogen connected with carbon when the Morgan extended connectivity of the carbon atoms equals 5, 6, or 7, or the Morgan extended connectivity of the nitrogen atoms equals 4. In contrast, promoters of a decrease of pIC 50 are vertex degrees of carbon atoms equal to 2 or 3 and degrees of nitrogen atoms equal to 2. Some other features also act as promoters of an increase or decrease of cardiotoxicity . Examples of the influence of promoters of increase (e.g., ‘C…=…….’) and decrease (e.g., ‘C………..’) on the calculated cardiotoxicity values are also provided. The comparison of the statistical quality of the models built with the target functions TF 1 and TF 2 indicates that TF 2 provides better results.
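The selection rules (ii) and (iii) above can be sketched as a small filter over the correlation weights obtained in the three optimization runs. The feature codes and weight values below are hypothetical, merely mimicking the CORAL attribute notation:

```python
# Sketch of the promoter-selection rule: a feature is a stable promoter of
# pIC50 increase if its correlation weight (CW) is positive in all three
# Monte Carlo runs, and a stable promoter of decrease if negative in all
# three. Feature codes and CW values here are hypothetical.

def classify_promoters(cw_runs):
    """cw_runs: {feature_code: [cw_run1, cw_run2, cw_run3]}"""
    increase, decrease = [], []
    for feature, cws in cw_runs.items():
        if all(cw > 0 for cw in cws):
            increase.append(feature)   # rule (ii): stable positive CW
        elif all(cw < 0 for cw in cws):
            decrease.append(feature)   # rule (iii): stable negative CW
    return increase, decrease

runs = {
    "N...C.......": [1.1, 0.9, 1.3],    # stable promoter of increase
    "C...(.......": [-0.5, -0.2, -0.7], # stable promoter of decrease
    "O...=.......": [0.4, -0.1, 0.6],   # unstable: sign varies across runs
}
inc, dec = classify_promoters(runs)
print(inc, dec)
```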
A comparison with cardiotoxicity models suggested in the literature is also provided. The best single result is observed for TF 1 (split-1); however, the results for the other two splits in the case of the TF 1 -model are worse, and the variance of the coefficient of determination for the validation set is significant. In contrast, the average value of the coefficient of determination for the validation set in the case of the TF 2 -model is larger, and the variance is smaller than in the case of the TF 1 -model. Thus, despite the excellent result for split-1 with the TF 1 -model, on the whole the TF 2 -model is preferable. The above information allows us to state that the proposed models correspond to the five generally recognized principles of constructing a QSPR/QSAR model. However, it seems appropriate to dwell on a number of features of the considered method. A very useful feature of the approach under consideration is its significant heuristic potential, due to the possibility of formulating approximate statistical hypotheses as follows:
- Whether (and if so, how much) the considered endpoint depends on the representation of molecules using SMILES;
- Whether (and if so, to what extent) the considered endpoint depends on the representation of molecules using graphs;
- Whether the molecular features extracted from SMILES and from the graph provide a synergetic effect (i.e., an improvement of the predictive potential of a model in comparison with the separate SMILES-based and graph-based models);
- Whether IIC improves the predictive potential of models based on a SMILES-based representation of molecules;
- Whether IIC improves the predictive potential of models based on a graph-based representation of molecules;
- Whether CII improves the predictive potential of models based on a SMILES-based representation of molecules;
- Whether CII improves the predictive potential of models based on a graph-based representation of molecules;
- Whether the combined use of IIC and CII has a synergistic effect, that is, whether an improvement of the predictive potential of models is observed when IIC and CII are applied together compared to using IIC and CII separately.
In principle, the list of similar hypotheses that can be formulated and tested within the framework of the approach under consideration can be expanded. However, it seems more appropriate to consider the mentioned possibilities, providing them with brief explanations. In fact, only a part of the hypotheses listed above is considered here. The results can be formulated as follows: 1. The combined use of correlation weighting of SMILES attributes and graph invariants improves the predictive potential of the hERG inhibition model expressed as pIC 50; 2. For the considered compounds, the use of CII provides a better predictive potential than models built using IIC ; 3. The statistical results observed for the three random splits of the available compounds into the training and control sets are in good agreement with each other. Are there valuable models? If there are “valuable” models, then there must be models that are not “valuable”. How can valuable models be distinguished from less valuable ones? It has been stated that “All models are wrong, but some are useful” . Thus, how can useful models be distinguished from the set of wrong ones? The reproducibility of results and their clarity (graphical representation ) are most likely the main features of a useful model. In this paper, for this purpose, attempts were made to build several models using different splits. The development of criteria for the predictive potential of models is also part of the research designed to identify useful models; here, two new criteria for the predictive potential of a model, the IIC ( TF 1 ) and the CII ( TF 2 ), were compared.
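For concreteness, the two criteria being compared can be sketched directly from their definitions in Section 4.3 (Equations (12)-(19)). The observed/calculated arrays below are illustrative only:

```python
# Sketch of the two predictive-potential criteria, following their
# definitions in the Methods (Equations (12)-(19)). Assumes the set
# contains both positive and negative residuals.
import numpy as np

def iic(observed, calculated):
    """Index of ideality of correlation: r scaled by the ratio of the
    mean absolute errors of negative and positive residuals."""
    obs, calc = np.asarray(observed, float), np.asarray(calculated, float)
    r = np.corrcoef(obs, calc)[0, 1]
    delta = obs - calc                      # Eq. (17)
    mae_neg = np.abs(delta[delta < 0]).mean()   # Eq. (15)
    mae_pos = np.abs(delta[delta >= 0]).mean()  # Eq. (16)
    return r * min(mae_neg, mae_pos) / max(mae_neg, mae_pos)  # Eq. (12)

def cii(observed, calculated):
    """Correlation intensity index: 1 minus the sum of 'protests', i.e.
    increases of R^2 caused by removing a single compound."""
    obs, calc = np.asarray(observed, float), np.asarray(calculated, float)
    r2_full = np.corrcoef(obs, calc)[0, 1] ** 2
    protest = 0.0
    for k in range(len(obs)):               # Eq. (19): leave-one-out loop
        mask = np.arange(len(obs)) != k
        r2_k = np.corrcoef(obs[mask], calc[mask])[0, 1] ** 2
        protest += max(r2_k - r2_full, 0.0)
    return 1.0 - protest                    # Eq. (18)
```

A CII close to 1 indicates that no single compound is a strong “opponent” of the correlation.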
One can distinguish two basic components in the large variety of QSAR studies: (i) “applicative” studies and (ii) “theoretical” studies. “Applicative” studies aim to apply current approaches to solve practical tasks. “Theoretical” studies aim to develop new conceptions of QSPR/QSAR analysis. This study contains both applicative and theoretical parts. On the one hand, the Monte Carlo optimization technique described in the literature is applied here to build up (almost) standard models (the applicative part). On the other hand, new criteria of predictive potential are studied (the theoretical part). Thus, the epistemological aspect of the present QSAR research takes the form of the confirmation of two statements. First, all QSAR models are random events if they are built using random distributions into training and validation sets. Second, the usefulness of random QSAR models can be stated if the variance of their statistical characteristics is acceptably small. This section contains the technical details of the described approach. 4.1 Data The numerical data on 113 piperidine derivatives (pyridine-substituted piperidines, tertiary alcohol-bearing piperidines, spirocyclic piperidines, and isoxazole-containing piperidines) were taken from the literature . The activity is expressed as −logIC50, or pIC 50 . The set of compounds is split into (i) active training (≈25%), (ii) passive training (≈25%), (iii) calibration (≈25%), and (iv) validation (≈25%) sets. Each set has a defined task. The active training set is used to build the model; molecular features extracted from the simplified molecular-input line-entry system (SMILES), which represents the structure , of the active training set are involved in the Monte Carlo optimization, which assigns to these features the correlation weights that provide the largest value of the target function on the active training set.
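The fourfold division described above can be sketched as follows; the random seed and the placeholder molecule labels are illustrative:

```python
# Sketch: random ~25/25/25/25 split of a compound set into active training,
# passive training, calibration, and validation sets, as described above.
import random

def four_way_split(smiles_list, seed=1):
    items = list(smiles_list)
    random.Random(seed).shuffle(items)  # fixed seed -> reproducible split
    n = len(items)
    q1, q2, q3 = n // 4, n // 2, 3 * n // 4
    return {
        "active_training": items[:q1],
        "passive_training": items[q1:q2],
        "calibration": items[q2:q3],
        "validation": items[q3:],
    }

# 113 piperidine derivatives, as in the data set used here
sets = four_way_split([f"mol{i}" for i in range(113)])
print({k: len(v) for k, v in sets.items()})
```

Repeating the split with different seeds yields the several random splits (#1, #2, #3) whose agreement is examined in the Results.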
The passive training set checks whether the model built for the active training set is satisfactory for SMILES that were not involved in the active training set. The calibration set should detect the start of overtraining (overfitting). The validation set provides the possibility to assess the predictive potential of a model, since the data of the validation set are unknown while the model is being built. Our experience with CORAL shows that equal distribution over the four mentioned sets is likely the most rational strategy. At the beginning of the optimization, the correlation coefficients between the experimental values of the endpoint and the descriptor increase simultaneously for all sets, but the correlation coefficient for the calibration set eventually reaches a maximum; this is the start of overtraining, and further optimization leads to a decrease of the correlation coefficient for the calibration set. Optimization should be stopped when overtraining starts. After stopping the Monte Carlo optimization procedure, the validation set is used to assess the model’s predictive potential. 4.2 Optimal Descriptor The optimal descriptor, calculated from the representation of the molecular structure by SMILES, serves as the basis of the model for cardiotoxicity. The optimal descriptor for the predictive model of the endpoint is calculated with Equations (7) and (8):
pIC 50 = C 0 + C 1 × DCW(T, N) (7)
DCW(T, N) = Σ k=1…NA CW(S k ) + Σ k=1…NA−1 CW(SS k ) (8)
where T is an integer that separates the molecular features extracted from SMILES into rare and non-rare ones. The non-rare features serve to build up the model; the rare features are not used. N is the number of epochs of the optimization of the correlation weights. S k is a SMILES atom, i.e., one SMILES line symbol (e.g., ‘=’, ‘O’) or a group of symbols that cannot be examined separately (e.g., ‘Cu’, ‘%11’). SS k is a pair of neighboring SMILES atoms.
CW(S k ) and CW(SS k ) are the correlation weights of the SMILES attributes (SA k ), and NA is the number of non-rare SMILES attributes. An example of the DCW(1, 15) calculation is also provided. 4.3 Monte Carlo Optimization Equation (8) requires numerical values for the above correlation weights; Monte Carlo optimization is employed to calculate them. Here, two target functions for the Monte Carlo optimization are examined:
TF 0 = r AT + r PT − |r AT − r PT | × 0.1 (9)
TF 1 = TF 0 + IIC C × 0.5 (10)
TF 2 = TF 1 + CII C × 0.5 (11)
Equation (9) was defined empirically during the development of many different models. The variables r AT and r PT are the correlation coefficients between the observed and predicted values of the endpoint for the active training set and the passive training set, respectively. IIC C is the index of ideality of correlation , calculated with the data of the calibration set as follows:
IIC C = r C × min(MAE C −, MAE C +) / max(MAE C −, MAE C +) (12)
min(x, y) = x if x < y, otherwise y (13)
max(x, y) = x if x > y, otherwise y (14)
MAE C − = (1/N−) Σ |Δ k |, where N− is the number of Δ k < 0 (15)
MAE C + = (1/N+) Σ |Δ k |, where N+ is the number of Δ k ≥ 0 (16)
Δ k = observed k − calculated k (17)
where observed k and calculated k are the corresponding experimental and model values of the endpoint. The correlation intensity index ( CII ), like the IIC , was developed to improve the quality of the Monte Carlo optimization used to build up QSPR/QSAR models. CII is calculated as follows:
CII C = 1 − Σ Protest k (18)
Protest k = R k 2 − R 2 , if R k 2 − R 2 > 0; 0, otherwise (19)
where R 2 is the correlation coefficient for a set that contains n substances, and R k 2 is the correlation coefficient for the n − 1 substances of the set remaining after removing the k-th substance.
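A sketch of the descriptor of Equations (7) and (8) follows, with a deliberately simplified SMILES tokenizer and hypothetical correlation weights (real CORAL weights come from the Monte Carlo optimization, and its attribute handling is richer):

```python
# Sketch of the optimal descriptor of Equations (7)-(8): DCW is the sum of
# correlation weights of single SMILES atoms (Sk) and of pairs of
# neighboring SMILES atoms (SSk). Tokenizer and weights are illustrative.

def smiles_atoms(smiles: str):
    """Crude tokenizer; real CORAL also treats tokens like '%11' as one atom."""
    atoms, i = [], 0
    while i < len(smiles):
        if smiles[i:i + 2] in ("Cl", "Br"):  # two-character element symbols
            atoms.append(smiles[i:i + 2]); i += 2
        else:
            atoms.append(smiles[i]); i += 1
    return atoms

def dcw(smiles: str, cw_single: dict, cw_pair: dict) -> float:
    atoms = smiles_atoms(smiles)
    total = sum(cw_single.get(a, 0.0) for a in atoms)          # CW(Sk)
    pairs = ["".join(sorted(p)) for p in zip(atoms, atoms[1:])]
    total += sum(cw_pair.get(p, 0.0) for p in pairs)           # CW(SSk)
    return total

# Hypothetical correlation weights, stand-ins for optimized values:
cw_single = {"C": 0.3, "N": 1.2, "=": -0.4, "(": 0.0, ")": 0.0, "1": 0.1}
cw_pair = {"CN": 0.8, "=C": -0.2}
print(dcw("CC(=C)CN", cw_single, cw_pair))
```

The predicted pIC50 then follows from Equation (7) as C0 + C1 × DCW.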
If (R k 2 − R 2 ) is larger than zero, the k-th substance is an “opponent” of the correlation between the experimental and predicted values of the set. A small sum of “protests” means a more “intense” correlation. 4.4 Applicability Domain The applicability domain of the described models is defined via the “statistical defects” of the molecular features extracted from SMILES or HSG. These are calculated as follows:
d k = |P(A k ) − P′(A k )| / (N(A k ) + N′(A k )) + |P(A k ) − P″(A k )| / (N(A k ) + N″(A k )) + |P′(A k ) − P″(A k )| / (N′(A k ) + N″(A k )) (20)
where P(A k ), P′(A k ), and P″(A k ) are the probabilities of A k in the active training, passive training, and calibration sets, respectively, and N(A k ), N′(A k ), and N″(A k ) are the frequencies of A k in the active training, passive training, and calibration sets, respectively. The statistical SMILES defect (D j ) is calculated as follows:
D j = Σ k=1…NA d k (21)
where NA is the number of non-blocked SMILES attributes in the SMILES. A SMILES falls in the applicability domain if
D j < 2 × D̄ (22)
where D̄ is the average statistical defect over all compounds.
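The domain-membership rule of Equations (20)-(22) can be sketched as follows; the attribute probabilities and frequencies below are hypothetical:

```python
# Sketch of the applicability-domain check of Equations (20)-(22): a
# compound is inside the domain if its summed "statistical defect" Dj is
# below twice the average defect. All input numbers are illustrative.

def attribute_defect(p, p1, p2, n, n1, n2):
    """dk for one attribute (Eq. 20): pairwise probability differences,
    each normalized by the summed frequencies of the two sets compared."""
    return (abs(p - p1) / (n + n1)
            + abs(p - p2) / (n + n2)
            + abs(p1 - p2) / (n1 + n2))

def smiles_defect(attributes):
    """Dj (Eq. 21): sum of dk over all non-blocked attributes of one SMILES."""
    return sum(attribute_defect(*a) for a in attributes)

def in_domain(d_j, mean_defect):
    """Eq. 22: Dj must be below twice the average defect over all compounds."""
    return d_j < 2.0 * mean_defect

# One-attribute compound: probabilities 0.5/0.4/0.6, frequencies 10/10/10
print(in_domain(smiles_defect([(0.5, 0.4, 0.6, 10, 10, 10)]), 0.015))  # prints True
```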
The suggested approach provides reliable cardiotoxicity models, since their predictive potential is confirmed for three random splits into training and validation sets. The Monte Carlo optimization with the target function TF 2 , calculated with the correlation intensity index (Equation (11)), is more accurate and more reliable than optimization with the index of ideality of correlation, i.e., the target function TF 1 (Equation (10)). The simplicity of applying the model for comparing the potential biological activity of different molecules is also illustrated. In fact, such analysis can serve as a tool for the preliminary assessment of biological activity based solely on a set of Monte Carlo computational experiments with different distributions of the available data over the training and validation (test) sets.
Rhinomodelation With Polycaprolactone: A Safer and Effective Solution for the Future
A reported case described two sequential lumps due to migration of filler; one of these lumps was in the forehead, despite the filler having been injected in the nose (confirmed by histopathology). Furthermore, it has been documented that filler migration can occur early or late after deep soft tissue and/or supraperiosteal HA administration. This situation can produce unwanted aesthetic and functional results . Polycaprolactone (PCL) is a collagen stimulator that holds considerable potential in the field of aesthetics. With its unique property of stimulating collagen, it carries a lower risk of migration. Biostimulatory dermal fillers have proven efficacy and superiority over hyaluronic acid fillers in areas such as the nasolabial folds , and studies have shown that PCL is completely excreted from the body . Materials and Methods This study aimed to assess the efficacy and aesthetic outcomes of the PCL filler Ellansé (Sinclair Pharmaceuticals, London, UK) in nonsurgical rhinoplasty. Twelve patients who were fit for the nonsurgical approach were selected. Two of the twelve cases were post-surgical rhinoplasty complications in patients who were not willing to undergo surgical intervention again; the remaining cases were patients who refused the surgical approach and opted for nonsurgical rhinomodelation. The patients were in the age group 20–35. The patients were followed up for a period of 12 months to assess the aesthetic outcome and patient satisfaction. The enrollment period was December 2023. 2.1 Pre‐Treatment Every patient underwent a thorough examination, which included a review of their medical history and a list of their current medications, before beginning treatment. The age group eligible for the study was 18–50 years. Patients with a history of nasal problems were excluded from the study. Patients were asked about their allergies in general and about allergy to hyaluronic acid or PCL fillers in particular. A detailed history of previous cosmetic procedures was taken.
Patients with a previous history of nonsurgical rhinoplasty were included in the study on the condition that the previous procedure had been performed at least 1 year earlier. Patients in whom surgical rhinoplasty had failed and who were either not eligible or not willing to undergo a revision surgical intervention were included in the study. Patients suspected of suffering from body dysmorphic disorder were excluded, as were patients with unrealistic expectations. To ascertain each patient's requirements, pre-treatment photos of the nose were acquired. All participants signed and dated the study consent form and received patient information before beginning the study. Copies of the signed documents were given to the subjects, and the originals were stored in the subjects' files. Ten of the twelve patients were reluctant to undergo surgical rhinoplasty and opted for the nonsurgical option; the other two were post-rhinoplasty complication cases who were not willing to undergo surgical revision again. The risks and benefits of using PCL were explained in detail to all the patients. The Global Aesthetic Improvement Scale (GAIS; 3 = very much improved, 2 = considerably improved, 1 = improved, 0 = no change, and −1 = worse) was used to assess the outcome. The sum of the GAIS ratings was quantified as the total improvement (TMI). The GAIS assessment was done by three independent evaluators (a dermatologist and two plastic surgeons) through photographic assessment. The photographs were taken before, immediately after, and 12 months after the procedure. The GAIS score was calculated again by comparing the pictures taken before and 12 months after the procedure. The patients were asked to perform a self-assessment of the result through a questionnaire: they were asked whether they were highly satisfied, satisfied, or dissatisfied (1 being highly satisfied and 3 being dissatisfied) right after the procedure and at the 12-month follow-up.
2.2 Injection Technique The same medical professional (a dermatologist in this case) performed all injections with the same filler, Ellansé M. The procedure started with the application of a topical anesthetic to the nasal region, followed by a 15-min wait; the target location was thus first numbed and then cleaned with an antiseptic fluid. The PCL filler was injected using the prefilled syringe provided by the manufacturer, with a 27 G × ¾″ needle (Terumo, Europe) provided and recommended by the filler manufacturer. Two nasal regions, the radix and the nasal dorsum, were given PCL filler injections based on the patient's demands. A maximum of 0.2 mL of PCL was administered at each injection site, placed supraperiosteally at the radix and dorsum, with a total maximum of 0.4 mL of PCL filler injected. A 0.1–0.2 mL bolus of the PCL filler was injected perpendicularly down to the level of the periosteum; if needed, two further boluses were injected 3 mm from the first in the cranio-caudal direction. 2.3 Post Procedure Care A steri-strip was placed on both sides of the nose for a period of 2 days. The patients were advised not to massage the area and not to wear any glasses for a period of 2–3 weeks. It was explained that redness and swelling might occur in the injected area. Patients were advised to take paracetamol tablets for pain and to refrain from ibuprofen and aspirin because of the increased chance of bruising. They were further advised to contact the clinic in case of pain not resolved by the painkiller, excessive swelling, or any skin change in the adjacent area. Further instructions included avoiding massage of the area, sun and heat exposure, smoking, alcohol, sauna, and swimming for the first 24 h. Patients were recommended to refrain from wearing sunglasses, goggles, or reading glasses for a period of 15 days to avoid pressure on the injected area.
2.1 Pre-Treatment
Every patient underwent a thorough examination before beginning treatment, which included a review of their medical history and a list of their current medications. The age group selected for the study was 18–50 years. Patients with a history of nasal problems were excluded from the study. Patients were asked about allergies in general and about allergy to hyaluronic acid or PCL fillers in particular. A detailed history of previous cosmetic procedures was taken. Patients with a previous history of nonsurgical rhinoplasty were included in the study on the condition that the previous procedure had been performed at least 1 year earlier. Patients in whom surgical rhinoplasty had failed and who were either not eligible or not willing to undergo revision surgery were also included. Patients suspected of suffering from body dysmorphic disorder were excluded, as were patients with unrealistic expectations. To ascertain each patient's requirements, pre-treatment photographs of the nose were acquired. All participants received patient information and signed and dated the study consent form before beginning the study. Copies of the signed documents were given to the subjects, and the originals were stored in each subject's file. Ten of the twelve patients were reluctant to undergo surgical rhinoplasty and opted for the nonsurgical option; the remaining two patients had post-rhinoplasty complications and were not willing to undergo surgical revision again. The risks and benefits of using PCL were explained in detail to all the patients. The Global Aesthetic Improvement Scale (GAIS; 3 = very much improved, 2 = considerably improved, 1 = improved, 0 = no change, and −1 = worse) was used to assess the outcome. The sum of the GAIS ratings was quantified as Total improvement (TMI). The GAIS assessment was performed by three independent evaluators (a dermatologist and two plastic surgeons) through photographic assessment.
The photographs were taken before, after, and at 12 months after the procedure. The GAIS score was calculated again by comparing the pictures taken before and 12 months after the procedure. The patients were also asked to self-assess the result through a questionnaire: they were asked whether they were dissatisfied, satisfied, or highly satisfied (1 being highly satisfied and 3 being dissatisfied), both right after the procedure and at the 12th month when they came for follow-up.
2.2 Injection Technique
All injections were performed by the same medical professional (a dermatologist) using the same filler, Ellansé M. The procedure started with the application of a topical anesthetic to the nasal region, followed by a 15-min wait. Once the target location was numb, it was cleaned using an antiseptic fluid. The PCL filler was injected from the prefilled syringe provided by the manufacturer, using a 27 G × ¾″ needle (Terumo, Europe) supplied and recommended by the filler manufacturer. Two nasal regions, the radix and the nasal dorsum, were given PCL filler injections based on the patient's demands. A maximum of 0.2 mL of PCL was administered at each injection site, placed supraperiosteally at both the radix and the dorsum, with a total maximum of 0.4 mL of PCL filler injected. A 0.1–0.2 mL bolus of the PCL filler was injected perpendicularly down to the level of the periosteum; if needed, up to two further boluses were injected 3 mm from the first one in the cranio-caudal direction.
2.3 Post Procedure Care
A Steri-Strip was placed on both sides of the nose for a period of 2 days. The patients were advised not to massage the area and not to wear any glasses for a period of 2–3 weeks. It was explained that redness and swelling might occur in the injected area, and patients were advised to take paracetamol tablets for the pain.
Patients were asked to refrain from ibuprofen and aspirin because of the increased chance of bruising. They were further advised to make contact in case of pain not relieved by painkillers, excessive swelling, or any skin change in the adjacent area. Further instructions included avoiding massage of the area, sun and heat exposure, smoking, alcohol, sauna, and swimming for the first 24 h. Patients were also recommended to refrain from wearing sunglasses, goggles, or reading glasses for a period of 15 days to avoid pressure on the injected area.
3 Results
The results of this study demonstrate that medical remodeling of the nose using a PCL filler is a safe, reliable technique with high patient satisfaction (Tables , , and Figures , , , , , ). In the current study, GAIS evaluations were performed by three independent evaluators: two plastic surgeons and one dermatologist. The assessment of all three was 90% for the treated patients after 12 months of follow-up; right after the procedure, the assessment was 80% (Table ). The patient assessment questionnaire revealed that all patients were very satisfied with their results both right after the procedure and 12 months after the procedure (Tables and ). The volume of PCL filler injected ranged between 0.2 and 0.4 mL; the exact quantity injected is shown in Figures , , , , , . In all the cases, PCL was placed deeply, over bony and cartilaginous tissue, just above the periosteum and/or the perichondrium; this was done to avoid vessel cannulation and related vascular problems.
4 Discussion
Heden et al. analyzed over 250 individuals who had received hyaluronic acid (HA) treatment for nose contouring since 1997. HA injection effectively treated nasal deformities that would have been challenging to fix surgically, serving as a supplement to surgery. The effect lasted for more than a year, with some patients experiencing longer durations (up to 5 years).
Presently, a variety of fillers are available on the market, each with unique properties. But aside from HA or other options, we think PCL is a good choice with clear benefits in several areas and indications. It should be mentioned that most HA products, especially the longer-lasting ones, are made of highly reticulated HA, and there is still much to learn about these devices' long-term toxicity . The reason for choosing polycaprolactone (PCL) is that it is a biocompatible, biodegradable, and bioresorbable polymer with an original cellular response, demonstrating long-term efficacy and duration of action . It has been shown that PCL is completely excreted from the body. It is used in daily clinical practice, and prevention and treatment recommendations are well defined through both evidence and experience. First synthesized in the 1930s, there is sufficient evidence in favor of its better viscoelastic properties compared with other biodegradable polymers . The design of the filler has shown a better safety profile and longer-lasting effect than collagens and free hyaluronic acids. In comparison to Sculptra (poly-L-lactic acid; Galderma Laboratories, USA) and Radiesse (calcium hydroxyapatite; Merz North America, USA), the PCL-based collagen stimulator Ellansé has the longest action duration . It has been used for subdermal implantation for long-lasting facial wrinkle correction and hand rejuvenation since its approval in 2009. The PCL filler is composed of regular PCL round microspheres suspended in a tailor-made aqueous CMC gel carrier. Both immunohistochemistry studies and quantitative and qualitative studies using Vectra 3D have shown that the effect of PCL fillers may last up to a period of 2 years . CaHA-based fillers, on the other hand, have shown limited clinical efficacy: studies show a 6–24 month effect, which casts considerable doubt on their long-term efficacy .
4.1 PCL Resorption Mechanism
When thinking about biopolymers, it is crucial to remember that biodegradable does not always mean bioresorbable; in other words, just because something breaks down and travels away from its site of action in vivo does not always mean that the body gets rid of it. Bioresorbability, on the other hand, refers to the complete removal of the original foreign material and its bulk breakdown by-products (low molecular weight molecules) with no lingering adverse effects . When water enters the microspheres, the ester linkages throughout the whole polymer matrix gradually hydrolyze, leading to the bulk degradation process that is characteristic of PCL-filler degradation by hydrolysis. First, the mass, volume, and form of the implant stay constant over time, while the length and molecular weight of the polymer chains drop. Next, once hydrolysis has created low molecular weight chains, the tiny polymer fragments diffuse away. Within the microspheres, PCL shows both amorphous and crystalline parts; the amorphous portions are more readily hydrolyzed than the crystalline regions, so the longevity of the microspheres ultimately depends on the hydrolytic disintegration of the PCL crystalline areas. The chain length (molecular weight) of the products in the Ellansé range is what sets them apart. The PCL monomers (6-hydroxycaproic acid) undergo a series of reactions that end in the TCA cycle .
4.2 PCL Mechanism of Action
The two components of the filler are CMC and PCL: the PCL microspheres are suspended in a customized aqueous CMC gel carrier. The CMC gel provides an immediate effect, and the PCL provides a sustained effect . The sustained effect is driven by collagen production and the 3D scaffold formed, preventing cluster formation and prolonging the effect. Collagen, the most predominant protein in the human body and skin, plays a crucial role in the extracellular matrix (ECM) and in skin changes.
The PCL filler, a collagen stimulator, has been shown to increase collagen production in both animals and humans, hence inducing a tissue repair process through inflammation, proliferation, and remodeling . Its long-term efficacy and duration of action have been shown in clinical studies. It has been demonstrated that a PCL-based collagen stimulator can improve facial volume, forehead augmentation, and skin quality . The long-term efficacy of the PCL filler is demonstrated in volume restoration, contour redefinition, skin rejuvenation, skin quality, and wrinkle reduction . Galadari et al. in their study showed the superiority of PCL fillers over HA fillers over a period of 12 months; this randomized, split-face study clearly concluded in favor of PCL-based dermal fillers.
4.3 PCL Safety Profile
Clinical studies have shown no serious adverse events with the PCL filler globally; no granuloma or vascular complications have been reported in them. However, like all other fillers, there may be some injection-related responses, mainly edema or ecchymosis, which are usually mild and resolve within a few days without intervention . In a European clinical study, injection site reactions were similar to those of any filler, with swelling and bruising disappearing in 2–4 days. The long-term safety of the PCL filler was confirmed in a clinical study following safety for up to 30 months . From 2009 to December 2017, 355 adverse events were reported, with a low adverse event rate of 0.056% . A review from launch to December 2020 found an adverse event rate of 0.0462%, or approximately 1 in 2165 syringes . The overall safety profile of the PCL-based collagen stimulator is good. However, there have been cases of late reactions, granuloma, discoloration, and xanthelasma-like reactions. A review of complications in Korea showed good safety in 780 treated subjects from April 2015 to May 2018, with edema and bruising being the most common .
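The per-syringe equivalence of the quoted adverse event rates can be cross-checked with simple arithmetic. The sketch below is purely illustrative and assumes nothing beyond the published percentages; the helper function name is my own.

```python
# Cross-check of the adverse event (AE) rates quoted above.
# No study data beyond the published percentages is assumed here.

def rate_percent(events: int, syringes: int) -> float:
    """AE rate expressed as a percentage of syringes used."""
    return 100.0 * events / syringes

# A rate of 0.0462% corresponds to about one adverse event per 2165
# syringes, since 1 / 2165 = 0.000462.
print(round(rate_percent(1, 2165), 4))  # -> 0.0462
```

The same function reproduces the earlier figure as well: 355 events at a rate of 0.056% implies a denominator of roughly 634,000 syringes.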
Overall, the PCL filler has shown good long-term efficacy, long action duration, and safety in aesthetic treatment.
4.4 Complication Prevention and Treatment: Recommendations
Physicians must be aware of the product characteristics to avoid adverse events. Important consideration should be given to the right patient selection, proper aseptic conditions, and the right injection technique . Preventing and managing complications of PCL fillers is essential for optimal aesthetic outcomes, and following strict rules and general recommendations is crucial for PCL-based collagen stimulator treatment. Prevention measures have been discussed in various articles and expert reviews .
4.5 Prevention of Adverse Events
- Pre-procedural, procedural, and post-procedural care
- Proper selection of patients
- Thorough patient assessment, understanding of expectations, and aesthetic assessment
- Right product selection
- Strict adherence to the Instructions for Use (IFU)
- Aseptic conditions to prevent infection
- Injection by recognized and trained physicians only
- Knowledge of and focus on anatomy, product characteristics, and injection techniques
- Proper advice on post-procedural care
4.6 Management of Adverse Events: Recommendations
Dermal fillers are a common treatment option for a variety of conditions, but severe adverse events are rare. To manage these effects, physicians must localize and recognize the culprit product. Knowing the anatomy and blood supply of any area to be injected is mandatory . It is necessary to rule out whether the PCL filler is causing the event or not; the injected product, whether a PCL or HA filler or any other product, should be considered "guilty until proven innocent". The recent addition of ultrasound in aesthetics can help identify the filler, improve diagnosis, and guide treatment. The most common minor side effects are swelling/edema, and the rare ones are nodules/lumps and granulomas. Christen et al.
in their study have provided a comprehensive guideline for managing complications with PCL fillers. Swelling/edema is a normal inflammatory reaction to the trauma caused by the injection or to a large injected volume, and it should disappear within 5–7 days. Prophylaxis with anti-inflammatory enzymes, arnica/gelsemium, cold compresses, and anti-inflammatory drugs is recommended for moderate cases. Persistent edema localized to the treated zone, lasting from more than 7 days up to 2 weeks, is logically treated with oral corticosteroids. For dermal fillers in general, long-lasting malar edema responds poorly to treatment, and experts recommend prevention. Nodules should not be confused with granulomas: they are noninflammatory and hard, localized at the injection sites, pea-shaped, and do not increase in size. Treatment depends on the time of onset, and a wait-and-see attitude is recommended. Nodules/lumps occurring early after injection are generally related to a technical error, and intralesional microinjection of corticosteroids is the standard treatment; treatment generally needs to be repeated. Inflammatory nodules/granulomas are rare but severe adverse events: a secondary, late-onset chronic inflammatory reaction of varying etiology occurring 6–24 months after injection. Treatment is based on intralesional corticosteroids (high-dose triamcinolone mixed with lidocaine and 5-fluorouracil) to prevent recurrence and skin atrophy. Oral corticosteroids are often associated with recurrent granuloma, and surgical therapy is a last resort owing to the difficulty of removing the granuloma completely and the risk of infection and scars. The PCL filler is not known to be associated with granuloma, but it is important to provide information on the treatment recommended by a group of experts.
The treatment described starts with systemic prednisone 1 mg/kg per day for 1 week together with intralesional microinjection of a corticosteroid solution (methylprednisolone or triamcinolone, 20 mg/mL final concentration); methotrexate or 5-FU can be added. This can be a long-term treatment of several months, which can be stopped and reinitiated according to progression. A new technique for nodule/granuloma treatment was developed in recent years as an option before surgery: intralesional laser treatment (ILT). Applied to the PCL filler, this technique has shown benefit in the very few cases treated (few, owing to the extremely low AE incidence). The physico-chemical properties of the PCL polymer, with its low melting point, should make it particularly sensitive to this technique. The nose is an area of high vascularity: complications can be due to intravascular injection and/or a pressure effect leading to occlusion, so knowing the anatomy is of the utmost importance. Deep injection at the supraperiosteal plane is recommended, with a bolus of no more than 0.2 mL at each site. On the dorsum, injecting at the center of the nose is a safe way to avoid vascular compromise. Undercorrecting the area is another key consideration. Early bumps, if they develop, can be controlled with massage. In case of a delayed complication, the above guidelines are of the utmost importance, and surgical intervention may be needed if the problem persists.
5 Conclusion
PCL fillers hold a lot of promise because their collagen-stimulating effect may result in long-term sustained results. The use of PCL in nonsurgical rhinomodelation had not been studied before. Expert injection technique is of the highest importance, and knowledge of how to handle adverse events, should any occur, is of the utmost consequence. In this study, a high satisfaction assessment score was achieved, and there were no reported major complications. The study outcomes confirm that injecting PCL filler into the deep planes can lead to aesthetically pleasing results with high patient satisfaction. The study was limited by its open-label design and small number of patients, but larger, randomized trials in the future could strengthen the current results.
Though conducted with a follow-up period of 9 months, based on the data available for PCL fillers we can conclude that the results will be long-lasting. All authors contributed to data analysis, drafting or revising the article, gave final approval of the version to be published, and agree to be accountable for all aspects of the work. Two authors are consultants for Sinclair Pharma: Dr. Kamran Izhar Qureshi is a trainer for Sinclair Pharma, and Dr. Franco Vercesi is the International Key Opinion Leader for Sinclair Pharma. Dr. Hina Farooq Qureshi has no association with Sinclair Pharma. The authors declare no other conflicts of interest. |
Review of patterns in homicides by sharp force: one institution’s experience | 6d19da1a-23e9-42b6-b907-3ff4100e9df8 | 10752844 | Pathology[mh] | Sharp force injury fatalities are frequently encountered in forensic medicine. The manner of death in such cases is most frequently classified as homicide, followed by suicide and accident . Interestingly, a correlation can be seen between the gun control legislation existing in a specific country and the frequency of homicides involving sharp force. The stricter the gun laws and thus lower the availability of firearms, the higher the frequency of homicides by sharp force since sharp tools are the most easily available weapons. Taking the Czech Republic as an example of a country with rather strict gun control laws, the authors report that in their jurisdiction, sharp force fatalities represented 38.1% of all homicides between 2008 and 2020, making them the most frequent category of homicides, with firearm homicides merely accounting for 13.4%. This is in sharp contrast with the USA, where gun control laws are much more lenient, and firearm homicides accounted for 75% of all homicides . This is in line with the data reported for other European cities and regions with strict gun control laws. The incidence of homicides involving sharp force as reported for some jurisdictions is as follows: 27% in Oslo (Norway), 31% in Lisbon (Portugal), 32.7% in Brescia County (Italy), 33% in Copenhagen (Denmark), and 37% in Stockholm area (Sweden) . Given this context, this study is driven by the high frequency of such cases and their relevance for the community. The study discusses not only specific patterns of sharp force injuries in homicides, but also less frequently studied parameters such as the nature of the assault or perpetrators’ profiles. Special attention was paid to the evaluation of any accompanying blunt force injuries indicating the escalation of the assault prior to suffering the fatal sharp force injury or injuries. 
The autopsy files of the Department of Forensic Pathology of the Ostrava University Hospital, which provides autopsy services for the Moravian–Silesian Region with a population of approximately 1.19 million people, were searched for sharp force injury fatalities over a 13-year period from January 2008 to December 2020. Sharp force-related case files and autopsy reports were reviewed. The sex and age of the victims and perpetrators, the place of death, the nature of the assault, the type of sharp weapon used, the presence of clothing defects, the number and location of wounds, the presence and localization of defensive wounds, the cause of death, the toxicological findings of victims and perpetrators, and their relationship were summarized, and the corresponding statistical analysis was performed. For categorical variables, both absolute and relative frequencies are provided; for numerical variables, median values and ranges are provided. To analyze the data, the following statistical tests were used: the Mann–Whitney test, the chi-square test, and Fisher's exact test. All statistical analyses were performed using the R software (version 4.0.2, www.r-project.org ), and the significance level was set to 0.05. Between 2008 and 2020, a total of 14,327 autopsies were performed at the Department of Forensic Pathology in Ostrava. Sharp force injury fatalities accounted for 167 cases, including 91 suicides, 71 homicides, and 5 accidents. Of the 71 homicide cases, 43 victims were male and 28 female; 5 of the victims were younger than 18 years. The age of the victims ranged from 0 to 79 years, and the median age of male and female victims was 47 and 43 years, respectively. The available evidence and police investigation indicate that 69 perpetrators were responsible for the total of 71 homicides.
The lack of one-to-one correspondence is accounted for by the fact that in 4 cases a single perpetrator killed two persons, while two homicides were committed jointly by 2 perpetrators. Of all the perpetrators, as many as 68 were caught, and only one remained unidentified. Of the identified perpetrators, 72% were male and 28% were female; interestingly, only male perpetrators were responsible for the cases of double homicide. The median age of male and female perpetrators was 46 and 48 years, respectively. In 48% of the cases, the victim and the perpetrator were family members or partners; in 24%, they were friends; and in 28%, the perpetrator did not know the victim. Of the total of 71 homicides, 75% took place at home (houses, huts, etc.), and the remaining 25% took place in public areas including streets, pubs, or a swimming pool. Of the total of 71 victims, 73% died immediately after the assault, while 27% received CPR. Of those who received CPR, 37% were transported to the emergency department, 26% underwent abdominal and thoracic surgery, and 16% underwent thoracentesis. In the majority of the cases (93%), the appearance of the wounds indicated the use of one or more knives, while only 7% of cases involved other sharp tools (a screwdriver, scissors, an axe). In 58% of the cases, the offending tool was found at the crime scene or in its proximity, and in a mere 8% of the cases, the offending tool was found still embedded in the victim's body. In 82% of the cases, the victims were transported to the Department of Forensic Pathology in their clothing, while in 14% of the cases, the clothes were secured by the police at the crime scene or in the ambulance; in only 4% of the cases were the clothes not available at all (in two cases, the clothes were destroyed by fire, and one victim was murdered while taking a shower; see Table ).
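The perpetrator and victim totals reported above are internally consistent, as a quick tally shows. The 61 one-to-one cases below are implied by the published figures rather than stated in the source.

```python
# Tally of perpetrators vs. victims as described above:
# 4 perpetrators each killed two victims, 2 homicides were committed
# jointly by 2 perpetrators each, and the remaining cases (implied: 61)
# involved one perpetrator and one victim.
double_killings = 4      # one perpetrator, two victims each
joint_homicides = 2      # two perpetrators, one victim each
one_to_one = 61          # implied by the published totals

perpetrators = double_killings + 2 * joint_homicides + one_to_one
victims = 2 * double_killings + joint_homicides + one_to_one

print(perpetrators, victims)  # -> 69 71
```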
Most of the homicide cases involved multiple sharp force injuries (79%), while a single sharp injury was found in 21% of the cases. The most frequently injured body area was the left portion of the chest (49%). Figure shows the percentages of cases in which the respective body areas were injured (Fig. ). The most common cause of death was hemorrhagic shock (33%), followed by heart, neck, and intrathoracic vessel injury, as shown in Table . These results are in agreement with the published studies.

This study aimed to determine the following:
1. The number of cases in which the fatal sharp force assault was preceded by a blunt force assault, and the presence of blunt force defensive wounds in such cases.
2. The number of sharp force injuries sustained by the victims, including any defensive wounds, and the sex distribution thereof.
3. Positive toxicological findings in both perpetrators and victims, and the sex distribution thereof.

Blunt force assault

This section analyzes the presence of injuries inflicted by kicks, punches, or a blunt tool, as well as the presence of blunt force defensive wounds.

Presence of blunt force

Injuries located in the upper part of the skull or in the central part of the face (i.e., orbits, nose, and lips) were attributed to blunt force assault. The injuries were caused by kicks, punches, or blunt tools such as a knife handle, a barbell, a hammer, and an axe butt. The data showed that a blunt force assault preceded the sharp force assault in 27% of the cases, as shown in Table . No significant difference in sex distribution was found.

Presence of blunt force defensive wounds

Injuries located on the dorsal part of the hand and the ulnar side of the forearm were assessed as blunt force defensive wounds. Such wounds were found in 23% of the cases (Table ). No significant difference in sex distribution was found.
Presence of both blunt force and blunt force defensive wounds

The presence of both blunt force injuries and blunt force defensive wounds was found in 21% of the male victims and 11% of the female victims (Table ).

Blunt force assault and the sex of the perpetrator

Of note is the fact that a blunt force assault preceding the sharp force assault was present only in cases involving male perpetrators.

Sharp force assault

This section evaluates the number of sharp force injuries and their sex distribution, as well as the presence of sharp force defensive wounds.

Type of wounds

The majority of the cases involved incised and stab wounds, while slash wounds were identified in only 2 cases.

Number of wounds

The number of sharp force injuries ranged from 1 to 68. The victims were assigned to three groups according to the number of injuries, as shown in Table . The data reveal a significant difference in the number of injuries between male and female victims, with female victims suffering a higher number of injuries.

Sharp force defensive wounds

Sharp force defensive wounds were present in 54% of the cases; they could not be assessed in 4% of the cases owing to extensive thermal damage, and no such wounds were present in the remaining cases (Table ). Figure shows the number of sharp force defensive wounds, indicating a significant relationship between the number of injuries and the number of defensive wounds. The defensive wounds were most often localized on the left hands and forearms (Fig. ). Limb penetration and partial finger amputation were present in only 8% of the cases.

Toxicological findings in victims and perpetrators

All victims and most of the perpetrators were tested for the presence of alcohol and other relevant substances.

Toxicological findings in victims

Blood alcohol testing was performed in 99% of the victims. Alcohol intoxication was found in 59% of the victims, and the degree of intoxication is shown in Table .
A statistically significant difference was found between male and female victims, with male victims being intoxicated more frequently (Table ). Furthermore, cannabis intoxication was found in 1 case, and combined cannabis and methamphetamine intoxication was found in 1 case.

Toxicological findings in perpetrators

Where possible, the perpetrators were also tested for the presence of alcohol; the testing could not be performed in 24% of the cases. In cases where the perpetrator was arrested within a short time after committing the crime, a retrospective calculation was performed to determine the blood alcohol level at the time of the crime (the applied rate of decline after reaching the peak of the curve was 0.012–0.020%/h). The level of alcohol intoxication is shown in Table . No statistically significant difference was found between male and female perpetrators (Table ). In addition, three perpetrators tested positive for cannabis, one for methamphetamine, and one for both cannabis and toluene.
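The retrograde blood alcohol calculation described above can be sketched as follows. The elimination rates (0.012–0.020%/h) are those quoted in the text; the measured BAC and the elapsed time below are hypothetical examples, not study data.

```python
# Sketch of the retrograde blood alcohol calculation described in the text.
# Assuming the subject was past the peak of the absorption curve, the BAC at
# the time of the crime is estimated by adding back the alcohol eliminated
# between the crime and blood sampling.

def bac_at_crime(measured_bac, hours_elapsed, rate_per_hour):
    """Back-calculate BAC (in %) at the time of the crime."""
    return measured_bac + rate_per_hour * hours_elapsed

measured = 0.08   # % BAC measured at arrest (hypothetical)
elapsed = 3.0     # hours between crime and blood sampling (hypothetical)
low = bac_at_crime(measured, elapsed, 0.012)   # conservative estimate
high = bac_at_crime(measured, elapsed, 0.020)  # upper estimate
print(round(low, 3), round(high, 3))  # → 0.116 0.14
```

In practice such a calculation yields a range rather than a single value, since the individual elimination rate is unknown.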
In general, the results of this study, such as the number of male victims being higher than that of female victims, are consistent with the available studies dealing with sharp force injury fatalities. Since this study aims to be comprehensive and to include as many factors as possible, it proved rather challenging to find similar studies from other jurisdictions reporting data for all criteria. Therefore, this section discusses the results and compares them with other published studies, as far as data availability permits. Unlike in other studies, where the percentage of female victims ranged from 30 to 35%, it was slightly higher in this study, namely 39%. In general, the male-to-female ratio in the published studies varies from 2 to 5, with the exception of the study by Belghith et al. involving the Aboriginal population in Central Australia, where female victims accounted for 53% of all the victims. Such a high number of female victims in the Aboriginal population could be explained by the use of traditional punishment, which is still practiced in Central Australia. While the age of the victims in developing countries usually ranges from 20 to 30 years, data from developed countries show a significantly higher average age of the victims. This difference could be accounted for by different socioeconomic situations, differences in lifestyle, and different life expectancies in these countries. As reported in the results, 73% of the victims died immediately after the assault owing to the extent and lethal nature of the sustained injuries. Only 10% of all the victims were admitted to hospital. The victims who stayed alive long enough to be transported to hospital had sustained penetrating stab wounds with no severe internal organ damage.
For illustrative purposes, the typical case scenario can be summarized as follows: a male perpetrator assaults his victim at home using a knife, and the victim and the perpetrator are related or know each other. This scenario is also consistent with the results of other studies from Europe and Japan. Of note is the fact that in Tunisia, sharp force homicides were most commonly committed in public places (62.4%), which is in contrast with both the majority of the published studies and the study presented in this paper. Published studies report that the offending weapon was found at the crime scene or in its proximity in 49–53% of cases. In our cohort, the frequency of the presence of the weapon at the crime scene was slightly higher (58%), and in 8% of the cases, the weapon was found stabbed in the victim's body. This finding differs slightly from the studies by Terranova and by Thomsen et al., who report the offending weapon stabbed in the victim's body in 3.3% and 4.5% of cases, respectively. The reported frequency of clothes defects in cases of sharp force homicides ranges from 71 to 89%. In this study, clothes defects were seen in 83% of the cases; the more wounds suffered, the more frequent the clothes damage. In cases of single wounds, clothes defects were found in 83% of the cases, which is consistent with the findings reported by Burke et al., who reported clothes defects in 85% of their cases of single stab wounds. The majority of the cases involved multiple incised and/or stab wounds, while slash wounds were found only rarely, which corresponds to the results of the published studies. This is not surprising given that a household is the most common place of assault in developed countries, where easy knife availability can be assumed. A single stab wound was found in 21% of the cases. The published data regarding single stab wounds vary widely; Vassalini et al. found a single stab wound in only 9.8% of their cases, while Thomsen et al.
report the presence of a single stab wound in 18.9% of their cases. In the presented cohort, a single stab wound was found predominantly in male victims, while in female victims, a single stab wound was found only rarely. Previous studies have identified the thorax/neck as the most common site of sharp force injuries suffered in homicides, with the heart and the large vessels being the most frequently injured organs, which is in agreement with the findings of the present study. Such findings can be explained by the fact that the victim and the assailant were facing each other at the moment of the assault, and possibly by the assailant's general knowledge of human anatomy and the position of the vital organs and vessels. The study also aimed to determine whether the sharp force assault was preceded by a blunt force assault. Such use of blunt force may be expected in cases of conflict escalation, when the perpetrator believes himself to be stronger than the victim. Such assaults were committed only by male perpetrators, against both male and female victims, with little difference between the sexes. To be precise, 26% of male victims and 29% of female victims suffered such injuries before the fatal sharp force assault, which is contrary to what one would expect, i.e., a higher incidence of such injuries in female victims given their physical condition. The frequency of blunt force defensive wounds did not show a significant difference between the sexes. This corresponds well with the findings of Rogde et al., who observed that the incidence of additional violence does not differ between the sexes. With respect to the sharp force assault, there was a statistically significant difference in the number of sharp force injuries in male and female victims, with male victims suffering fewer injuries, including defensive wounds. This observation is consistent with other published studies. Sharp force defensive wounds were found in 54% of all cases.
Previous studies have shown the presence of such defensive wounds in 31–64% of cases of homicidal death from sharp weapon injury. The percentage increases with the number of stab wounds. In our cohort, defensive wounds were found in 85% of the victims who suffered more than 10 stab wounds. Similarly, Thomsen et al. report the presence of defensive wounds in 76.2% of the cases where the victim suffered more than 10 stab wounds. In the studied cases, defensive wounds were found in 49% of male victims and in 61% of female victims. The difference in the frequency of defensive wounds between male and female victims may possibly be explained not only by the number of wounds suffered but also by the higher blood alcohol levels in male victims, which could compromise both their self-defense and their motor coordination. The most frequent sites of defensive wounds included the fingers, palms, and the dorsal side of the hands, with a dominance of the left hand. This conclusion is in contrast with published studies, which report the palmar side of the right hand to be the most frequently injured part of the body during defense. There was a comparatively high incidence of injuries to the left arm and shoulder, i.e., sites adjacent to the left side of the thorax; such injuries may qualify either as defensive wounds or as primary injuries caused by a sharp force assault against the torso, which makes their interpretation challenging. The results showed that the higher the number of sharp force injuries, the higher the number of defensive wounds. Defensive wounds were present in all victims who suffered more than 10 sharp force injuries and died of hemorrhagic shock (i.e., there was no single injury directly causing death). A special case was a victim who suffered 72 sharp force injuries, predominantly in the area of the neck and the upper part of the thorax, but no defensive wounds were present.
This case involved a brutal assault, and it can be assumed that some of the initial injuries were fatal and thus prevented the victim from defending herself. Finally, the study also assessed the intoxication of both victims and perpetrators. In terms of alcohol intoxication, a zero blood alcohol level was found in 67% of female victims and in only 21% of male victims. This is in agreement with results from Scandinavia published by Rogde et al., who found that 35% of the male and 63% of the female victims had a zero blood alcohol level. In most cases, the blood alcohol level was within the range of severe and life-threatening alcohol intoxication (BAC > 0.31%). This is in contrast to the findings of Belghith et al., who report a frequency of alcohol-intoxicated victims in Tunisia of only 4.2%. These differences may be explained by the higher alcohol consumption of Europeans compared to other populations, especially those of Islamic countries. Intoxication involving other substances was rather rare. The perpetrators were tested for their blood alcohol level in only 76% of the cases, and severe or life-threatening intoxication was found in approximately one-third of both male and female perpetrators. Intoxication involving other substances was negligible. In many respects, this study has confirmed what has been published on sharp force homicides earlier. What differs is the higher incidence of female victims, as well as the higher age of both male and female victims. Furthermore, the number of sharp force injuries suffered by female victims was significantly higher than that suffered by male victims. No statistically significant difference was found between the number of blunt force injuries suffered by male and female victims before the fatal sharp force assault. Such injuries were present in approximately 25% of the victims, and the cases usually involved a male perpetrator.
Contrary to published studies, a higher incidence of severe or life-threatening alcohol intoxication in male victims was found. In conclusion, this retrospective study shows that while many aspects of sharp force homicides tend to be universal, there may be slight differences in results when different countries are contrasted and files of different departments and time periods are analyzed. When such differences exist (e.g., the age of the victims), they may sometimes be explained by the social and economic situation in the respective countries and regions. Homicides by sharp force are the most frequent type of homicides in the Czech Republic. The presence of both blunt force and sharp force trauma may be indicative of assault escalation. Alcohol intoxication is more frequent in male victims and male perpetrators. The sex and age of the victims may be related to the socioeconomic situation of the country. |
The physical performance of workers on offshore wind energy platforms: is pre-employment fitness testing necessary and fair? | 698bae53-4bbb-46bb-9abe-ee4966131a8a | 6435631 | Preventive Medicine[mh] | The last 2 decades have brought major technological advances and a significant increase in the industrial use of renewable energy sources. The wind energy sector, for example, has seen particular growth in Europe and China (GWO ). This evolution requires an appropriate assessment of the new and changing challenges to health and safety at work. In 2013, EU-OSHA ascertained significant skill gaps in this workforce. Many of the current recommendations on risk assessment, accident prevention, and physical requirements for work have simply been borrowed from related industries, such as the offshore oil and gas industry. Currently, only few standardized training programs specific to the offshore wind energy industry are available (EU-OSHA ). The offshore workplace is dangerous. Employees must be able to perform heavy manual labor, including windlass work and frequent climbing of ladders and stairs (e.g., for 30 min continuous, usually several times a day). Part of the work must be performed at great heights and under often rapidly changing weather conditions. Exposure to multiple physical stressors, including extreme temperatures, continuous noise and vibrations, and a decrease in sleep quality are generally unavoidable (DGAUM ; Velasco Garrido , ). These physical stressors are further compounded by the psychological pressures of being on strenuous 12 h, 14 days on/14 days off work shifts, with limited privacy in the often cramped, shared living quarters, and long absences from home (Parkes ; Mette et al. ). 
The consequences of such potentially high-risk situations can be grave and have led to the publication of multiple guidelines to ensure workers’ safety, not only through thorough risk assessment of the workplace, but also through evaluation of the physical and psychological suitability of each employee. A concept of fitness to work has to be built around the central question: “Can this person do the assigned tasks safely and repeatedly without foreseeable risk to their health and safety or that of their colleagues, third parties and company assets?” (IPIECA and IOGP ). Evaluations must consider “direct risks” (i.e., those of the employees and the working conditions themselves) and “indirect risks” (i.e., those that arise due to logistical challenges). The employee must be both “fit for task” and “fit for location” (IPIECA and IOGP ). This is especially true for the offshore environment, where medical facilities are often very limited and first-aid measures are the responsibility of the colleagues. In such circumstances, physical restrictions can result in dangerous situations, not only for the injured party, but also for the colleagues and the installations. The purpose of aptitude testing, therefore, should be twofold: it should ensure that employees are able to cope with extreme loads (e.g., ladder climbing and boat landing), as well as manage dangerous situations safely (e.g., rescue by colleagues, firefighting). In this way, danger to the employees themselves, their colleagues, and the platforms should be minimized. Despite the clear benefit aptitude testing can provide in terms of risk reduction, it also necessarily leads to the exclusion of employees from certain forms of work. This, combined with the lack of evidence or empirical data supporting the validity of the methods used to make assessments of fitness to work, is problematic (Serra et al. ; de Kort et al. ; de Kort and van Dijk ). 
Over the past few years, various European guidelines for aptitude testing for work in the offshore environment have been published. Germany, Norway, the UK, and the Netherlands, for example, have each produced a set of preventive measures, including medical or health standards, that employees are to meet before they can be cleared for work offshore (DGAUM/AWMF ; NOGEPA ; Norwegian Directorate of Health ; renewable UK ; Taylor ). This type of aptitude testing is also seen in other occupational fields, such as firefighting, the military, and law enforcement (Hauschild et al. ). Often, assessments of physical fitness are performed using cardiopulmonary exercise testing by cycle ergometry or with the Chester Step Test (Preisser et al. ). Ensuring the highest level of safety for the employees and their work environment, while also avoiding unnecessary or unfair exclusion, should be a major objective of fitness to work guidelines. It was our goal, therefore, to gain insight into the actual level of physical strain individual employees in the offshore wind energy industry are subjected to during their regular tasks. Direct on-site (offshore) measurements, however, are technically, organizationally, and legally complex. We, therefore, alternatively chose the mandatory (onshore) Global Wind Organization (GWO) safety training modules for our investigation of individual heart rate (HR) and oxygen consumption (V̇O2) levels. Although carried out onshore, without the added burden of the aforementioned physical and environmental stressors, the practical exercises performed in these sessions are characteristic of the offshore workplace.
Furthermore, prior to their first offshore shift and at subsequent regular intervals, every employee is required to complete these trainings. With this study, we want to verify whether the performance of each participant during the safety trainings is comparable to the individual maximum performance achieved during cycle ergometry, and, as a result, whether this form of exercise testing is a justified and fair aptitude test for work on offshore wind platforms. Recommendations for pre-existing conditions and mental health are also included in the fitness to work guidelines and, although important, they are beyond the scope of this article. The results are presented within the context of the requirements put forth by the guidelines of the German Society for Occupational and Environmental Medicine (DGAUM), published via the Association of the Scientific Medical Societies in Germany (AWMF) (DGAUM ), and of the British organisation renewableUK, which released a subsequent guideline specific to the wind energy sector and its risks (renewable UK ; Preisser et al. ).

During the safety training modules, we were able to recruit 29 participants for our study, only 1 of whom was female. Because performance reference values differ by gender, her data were not included in further analyses. Furthermore, because of various organizational circumstances, we were unable to perform cycle cardiopulmonary exercise testing (CPX) or cycle ergometry on all subjects, resulting in a final study sample of 23 male subjects. The measurements were taken during the GWO-specified modules such as Working at Heights, Sea Survival, and Fire Awareness, as these reflect the requirements of regular offshore work most accurately. Measurements were taken from September to November 2016 at the OffTEC Base GmbH & Co. KG in Enge-Sande, Schleswig–Holstein, Germany. The types of training activities and the weather conditions of the outdoor and simulated sea survival modules were recorded.
Participation was voluntary; no pre-selection on the part of the investigators was performed. At the beginning of each of the training modules, participants were informed of the study purpose and objectives, and written consent was obtained. Prior to exercise testing and field measurements, participants were required to complete a thorough questionnaire concerning their medical history, to detect any prior illnesses and/or risk factors that would have led to exclusion from the study. Spirometry and CPX testing were completed by 16 of the participants, while 7 underwent cycle ergometry without pulmonary data. The forced 1-s and vital capacities (FEV1, FVC, and FEV1/FVC values) of each consenting individual were determined with spirometry, according to current guidelines (Pellegrino et al. ; Criée et al. ). CPX was performed using a cycle ergometer (Speedbike S10.9, Sportsline, Germany) and in accordance with current recommendations (Meyer et al. ). During testing, V̇O2, carbon dioxide production (V̇CO2), oxygen saturation (pulse oximetry, SpO2), and HR were measured continuously with a pulse belt (Oxycon Mobile by JAEGER™/CareFusion, Hoechberg, Germany). Prior to each testing period, the equipment was volume and gas calibrated.
A previously defined continuous step protocol was used in all cases: following an initial 1-min period of rest and a 1-min warm-up at 75 watts (W), the load was increased by 25 W per minute until the subjects could no longer maintain the required crank frequency of approximately 60 rpm. The determined ventilatory threshold (VT) corresponds to VT1, the point at which blood lactate begins to accumulate and breathing frequency increases in an effort to blow off the higher levels of CO2 being produced to buffer acid metabolites. It can be identified using the V-slope method, i.e., the first disproportionate increase in V̇CO2 relative to V̇O2 (Schneider et al. ; Westhoff et al. ). Because offshore employees participate in a thorough physical examination prior to their start of employment, many of the subjects had already performed cycle ergometry. For those subjects who did not undergo CPX "on site", written consent was obtained to gain access to these test results. Although there were slight variations among the selected protocols, the testing conditions were similar to ours (room temperature, time of day, etc.). In this manner, four additional datasets were obtained, for a total of 23 men with exercise testing. Field HR measurements during the various training modules were taken using HR monitor watch-belt systems (T31 coded transmitter, Polar Electro, Buettelborn, Germany).
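The V-slope determination of VT1 can be sketched as a two-segment linear fit of V̇CO2 against V̇O2, with the breakpoint marking the first disproportionate rise in V̇CO2. The sketch below uses synthetic, noiseless data with an assumed threshold; clinical CPX software applies more robust variants of this idea.

```python
# Minimal sketch of the V-slope method: VT1 is located where the slope of
# VCO2 vs. VO2 first increases disproportionately. A brute-force two-segment
# least-squares fit over candidate breakpoints finds that point.
# Data are synthetic; the true threshold (2.0 L/min) is an assumption.

def line_sse(xs, ys):
    """Sum of squared errors of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return sum((y - (my + slope * (x - mx))) ** 2 for x, y in zip(xs, ys))

def v_slope_breakpoint(vo2, vco2):
    """Return the VO2 value at the best two-segment breakpoint."""
    best = min(range(3, len(vo2) - 3),  # leave >= 3 points per segment
               key=lambda i: line_sse(vo2[:i], vco2[:i])
                             + line_sse(vo2[i:], vco2[i:]))
    return vo2[best]

# synthetic ramp: VO2 rises 1.0 -> 3.0 L/min; below the assumed VT (2.0 L/min)
# VCO2 follows slope 0.9, above it slope 1.3 (excess CO2 from lactate buffering)
vo2 = [1.0 + 0.05 * k for k in range(41)]
vt_true = 2.0
vco2 = [0.9 * v if v < vt_true else 0.9 * vt_true + 1.3 * (v - vt_true)
        for v in vo2]
print(v_slope_breakpoint(vo2, vco2))
```

With real breath-by-breath data, smoothing and constraints on the candidate range would be needed before such a fit is meaningful.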
The activity of each individual was logged for later analysis. A minimum activity period of 2 min was set to account for potential delays in the change of HR or in recording by the equipment. Long periods of rest (e.g., during instruction, lunch breaks) were not included in the analysis. For HR and V̇O2 measurements, both absolute values and values relative to each individual’s maximum (%HRmax and %V̇O2max) were calculated and, where possible, values at the VT (%HRVT and %V̇O2,VT). HRwork was defined as the difference between the heart rate measured during the trainings and HRrest (Sammito et al. ).
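The bout-filtering rule (activity periods of at least 2 min, rest excluded) can be sketched as follows. The bout log, the tuple layout (label, duration in minutes, mean HR), and the HRmax value are all hypothetical, not the study's data format.

```python
def filter_activity_bouts(bouts, min_duration_min=2, exclude=("rest", "lunch")):
    """Keep only labelled activity bouts lasting at least `min_duration_min`
    minutes, dropping rest periods — mirroring the 2-min rule described
    above. Each bout is (label, duration_min, mean_hr)."""
    return [b for b in bouts
            if b[1] >= min_duration_min and b[0] not in exclude]

# Hypothetical field log for one participant:
log = [("ladder climb", 6, 142), ("rest", 15, 82),
       ("harness check", 1, 95), ("evacuation drill", 9, 131)]
kept = filter_activity_bouts(log)

hr_max = 185  # hypothetical individual maximum from cycle ergometry
print([round(hr / hr_max * 100) for _, _, hr in kept])  # [77, 71] (%HRmax)
```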
So-called ‘reserve values’ (i.e., values expressed relative to the difference between maximal and resting measurements) for HR (%HRR) and V̇O2 (%V̇O2R) were defined by the following equations: %HRR = HRwork/(HRmax − HRrest) × 100% and %V̇O2R = (V̇O2,training − V̇O2,rest)/(V̇O2max − V̇O2,rest) × 100%, respectively. HR and V̇O2 at rest were taken as the minimum HR during field measurements (including rest periods) and as the V̇O2 preceding CPX, respectively. The results are presented as mean and range. Due to logistical factors (time constraints, interference with personal protective equipment (PPE), etc.), direct oxygen consumption measurements during the field exercises were not possible. As a result, linear regression equations between HR and V̇O2 were obtained from on-site CPX testing; the correlation coefficients (R) and corresponding p values between the individual values of HR and V̇O2 were calculated for each of the 16 participants. An average R value was then calculated and presented as mean and range.
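The two reserve formulas translate directly into small functions; the example numbers are hypothetical, not measurements from this study.

```python
def pct_hr_reserve(hr_training, hr_rest, hr_max):
    """%HRR: HRwork (training minus rest) as a share of the HR reserve."""
    return (hr_training - hr_rest) / (hr_max - hr_rest) * 100.0

def pct_vo2_reserve(vo2_training, vo2_rest, vo2_max):
    """%VO2R: the analogous oxygen-uptake reserve fraction."""
    return (vo2_training - vo2_rest) / (vo2_max - vo2_rest) * 100.0

# Hypothetical subject: resting HR 60 bpm, maximal HR 185 bpm, training HR 110 bpm.
print(round(pct_hr_reserve(110, 60, 185), 1))       # 40.0
# Hypothetical VO2 values in ml/min: rest 400, max 3600, training 2000.
print(round(pct_vo2_reserve(2000, 400, 3600), 1))   # 50.0
```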
The average V̇O2 during the training modules was interpolated from these regressions (Preisser et al. ; Swain et al. ). Furthermore, the correlation coefficient (R) between the Pmax and V̇O2max values of the 16 subjects was calculated. Because of the small sample sizes within the different training modules, comparative tests (e.g., t test) were not performed. The participants provided their written informed consent to participate in this study. The study was approved by the Ethics Committee of the Hamburg Medical Association (register number PV5318). Study population characteristics The majority of the 23 male participants worked in maintenance and repair, although the sample also included people who spend little time offshore but must nevertheless complete safety training and undergo aptitude testing. We further included two trainers of the offshore training modules. Because the offshore wind energy sector is still young, concrete data on its current working population are rare (Velasco Garrido ; BWE ; Kubsova and Felchner ). Our collective, however, appears to be similar in age and sex distribution to that of employees in the offshore wind industry in Germany (Table ). The group was relatively young, with a mean age of 35 years (range 19–68). The mean BMI was 25 kg/m2 (range 19.3–33), putting our cohort on the boundary between normal weight and pre-obesity (Table ).
None of the subjects reported cardiovascular disease in their medical history, and none were taking HR-modifying medications. At the time of the study, 30% were active smokers (8.7% < 10/day, 13% 10–20/day, 8.7% > 20/day). All participants had normal spirometry values (data not shown). Specifics of the training modules The Working at Heights and Sea Survival modules each spanned at least 1 day, beginning at approximately 8 a.m. and finishing at approximately 4 p.m., with an hour-long lunch break and multiple shorter breaks in between. Fire Awareness took place in the afternoons and lasted approximately 4 h. Working at Heights was performed on two consecutive days. On the first day, for example, the participants were required to carry out rescue situations in wind turbines, using the appropriate safety and rescue devices and anchor points. They also had to demonstrate correct behavior on ladders while wearing PPE. On the second day, evacuation exercises from a mock turbine (height 18 m) in full PPE were performed. In addition, the participants discussed and practiced strategies to minimize suspension trauma (GWO ). The observed Sea Survival units consisted of safe transfer exercises from vessel to dock and from vessel to foundation, demonstration of individual and collective survival techniques, and the rescue and first aid of a “man overboard” (GWO ). In the Fire Awareness module, participants were asked to demonstrate knowledge of correct behavior in case of fire, as well as the proper practical application of fire extinguishing equipment (GWO ). Activities of modules carried out outdoors (i.e., Fire Awareness and Working at Heights) took place at temperatures ranging from 5 to 13 °C, with clear skies and wind speeds no greater than 17 km/h on any given day.
Results of cycle ergometry and CPX testing The average maximum power output (Pmax) for all 23 participants was 242.4 W (range 175–300), or 2.9 W/kg body weight (range 1.8–4.0) (Table ). For the 16 participants who underwent CPX, the V̇O2 and the determined VT are depicted in Table . A positive correlation coefficient (R) of 0.79 was observed between the values of Pmax and V̇O2max. In this study, HRmax showed a weak negative correlation with age (R = −0.29); however, the mean HRmax and V̇O2max values of the group were close or equal to the age-predicted values (90.4% (range 71.2–104.9%, n = 23) and 100.1% (range 63.8–132.8%, n = 16), respectively) (Sammito et al. ; Gläser et al. ), see Table .
Furthermore, across all measurements, a strong positive linear relationship was observed between HR and V̇O2 values during CPX testing (average R = 0.94, range 0.86–0.98, all p values < 0.001). This, along with evidence from the literature, justified using the derived linear regression equations to calculate V̇O2 values during training (Preisser et al. ; Swain et al. ). It was also determined that the average V̇O2 value at the VT was 69.9% of the average V̇O2max (range 63.8–90.8%, Table ).
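The estimation step — fitting an individual HR–V̇O2 regression from CPX data and mapping field HR onto it — can be sketched as follows. The paired values are synthetic and exactly linear, chosen only to show the interpolation, not data from this study.

```python
import numpy as np

# Paired HR (bpm) and VO2 (ml/min) samples from one subject's CPX test —
# illustrative values only.
hr_cpx  = np.array([70, 90, 110, 130, 150, 170])
vo2_cpx = np.array([600, 1000, 1400, 1800, 2200, 2600])

# Individual linear regression VO2 = a*HR + b (least squares).
a, b = np.polyfit(hr_cpx, vo2_cpx, 1)

# Field HRs recorded during a training module are mapped onto the
# regression line to interpolate oxygen uptake.
hr_field = np.array([95, 105, 120])
vo2_field = a * hr_field + b
print(round(float(vo2_field.mean()), 1))  # 1333.3 ml/min for these values
```

The strong individual HR–V̇O2 correlations reported above (R = 0.86–0.98) are what make this per-subject interpolation defensible.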
In other words, participants who exceeded their individual VT values during training were working at or above approximately 70% of their V̇O2max. The average weight-adjusted maximum oxygen consumption was 34.5 ml/kg/min (range 18.1–50.7 ml/kg/min), calculated for comparability with the renewableUK guidelines. The loads achieved at a HR of 150 bpm were calculated according to the German guideline (Table ). HR field measurements The results of the HR field measurements from the 23 subjects can be seen in Table , grouped according to training module. The varying group sizes result from the fact that some study subjects completed multiple training sessions over the course of our investigation (i.e., participation in both Working at Heights and Fire Awareness, Table ). The maximum absolute value of 205 bpm was observed during Working at Heights (ladder climbing). The highest average HR during the trainings was observed in the Fire Awareness group (113.2 bpm, range 78–154) and the lowest in Sea Survival (105 bpm, range 85–169). In all training modules, the participants were on average working at roughly 65% of their HRmax. Furthermore, the groups did not differ significantly with respect to values relative to their corresponding maxima (%HRmax) or those at the VT (%HRVT). Participants across all training modules achieved average HRwork levels that corresponded to approximately 40% HRR, see Table .
Oxygen consumption during training As described above, we found a strong correlation between HR and V̇O2, which allowed us to derive oxygen consumption during the trainings from the measured HR values. The average V̇O2 during the Fire Awareness, Working at Heights, and Sea Survival modules was 1404.5 ml/min (18.4 ml/min/kg), 1284.8 ml/min (14.9 ml/min/kg), and 923.6 ml/min (10.5 ml/min/kg), respectively (Table ). Participants in the Fire Awareness training reached average V̇O2 levels of approximately 77% of their average V̇O2 at the VT and 48% of their average V̇O2max.
For those who took part in Working at Heights and Sea Survival, values of approximately 50% of the V̇O2 at the VT and 35% of V̇O2max were reached (Table ). These differences were not significant. Discussion Physical aptitude testing is well established in a variety of physically demanding professions and is thought to represent an important aspect of occupational health and safety.
While the benefits of such a practice may seem readily understandable, the general lack of empirical evidence presents a major problem for the field of occupational medicine. Although developed with employees’ best interests in mind, such preventive measures must also strive to avoid the unjust exclusion of people from certain fields of work, such as the offshore wind industry. Here, we provide a first investigation of the physical strain to which employees in this sector are exposed. Because it is difficult to conduct on-site workload surveys at offshore workplaces, we examined individuals during compulsory safety training. The study group The relatively young average age of our study group (35 years) is similar to that observed for the offshore wind sector. As of 2012, roughly 65% of offshore employees worldwide were under 40 years of age, although there remained a small “core” of experienced workers (e.g., from similar industries such as oil and gas) over the age of 51 (Willis ). Our final collective, consisting of 23 men (data from the one female subject were not included in the statistical analyses), also reflects the gender distribution in the offshore wind energy industry. In Germany, approximately 19,000 people are employed in the offshore wind energy sector (FMEAE ). As of 2015, however, women made up less than 10% of all workers (Kubsova and Felchner ). The average BMI of 25 kg/m2 places our study sample on the boundary between normal weight and pre-obesity. BMI, however, does not distinguish between muscle, bone, fat mass, and level of physical fitness. Nevertheless, many individuals in our cohort had a low-to-normal fitness level (based on CPX and predicted maximum values), suggestive of a lack of training. On average, however, our cohort was within the acceptable range of physical fitness for their age and sex, with a mean maximal load of 2.9 W/kg (Table ).
In the literature, expected maximal values of 2.7 (± 0.4) W/kg are given for men between the ages of 30 and 39 (Prokop and Bachl ). Furthermore, our group collectively achieved a mean V̇O2max that was 100.1% of the predicted average maximal value adjusted for age, sex, height, and weight (Gläser et al. ). It should be noted, however, that predicted maximal values depend heavily on the particular equation used. Calculations based on another formula recommended by Hansen et al. , for example, resulted in a V̇O2max value that was only 93.5% of predicted. In any case, our cohort appeared generally to be at or marginally below the predicted values for both HRmax and V̇O2max. When considered individually, many of the participants achieved values that were in fact well below those expected (Table ). Offshore work as a form of heavy physical labor Offshore work in the wind energy industry is said to be physically taxing (DGAUM ; Parkes ); however, this claim has not yet been critically reviewed.
Based on the HR analyses, our study shows that 65% of the participants achieved average HRwork values exceeding 30% of their HR reserve (HRR), a threshold which characterizes hard work, as described below. Furthermore, the mean HR during all trainings was approximately that of the mean HR at 35% V̇O2max (110.7 bpm). It can therefore be assumed that our subjects performed work at a level of approximately 35% of their V̇O2max. According to the literature, “limit” values for acceptable levels of strain at work lie anywhere between 33 and 50% V̇O2max for an 8-h shift (Wilson and Corlett ; Evans et al. ; Åstrand et al. ), depending on the number and length of rest periods built into the schedule. For activities that involve prolonged periods of dynamic work, a standardized work–rest schedule (e.g., 50 min on, 10 min off) is recommended (Åstrand et al. ).
For occupations that involve manual labor with periods of heavy lifting, extreme temperatures, and work in cramped spaces (such as the offshore wind energy industry), even lower limits of approximately 30% VO2max are recommended. Other authors base their recommendations on the %HRR, where 33% is often seen as the upper limit for an 8-h shift (Ilmarinen et al. ; Rodgers et al. ). Shorter or longer work periods allow higher or require lower acceptable limits, respectively. For a 12-h shift, for example, Rodgers et al. recommend an upper limit of 28% VO2max. Knowing whether work–rest strategies exist at the offshore workplace would allow a more accurate comparison to the above-mentioned limits found in the literature. Furthermore, the fact that 57% of the participants in the current study reached HR values that at some point during the trainings exceeded their HR at the individual ventilatory threshold (VT) is at least indicative of the intense physical nature of offshore work (individual data not shown). This was observed particularly during Working at Heights, where the climbing of ladders is involved. This represents a level of strain at almost 70% of the average maximal value for our study group (Table ).
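To make the strain metrics above concrete, the sketch below computes the percent heart rate reserve (%HRR, Karvonen method) for a recorded working heart rate and looks up a shift-length-dependent %VO2max limit from the values cited in the text (~33% for 8 h, ~28% for 12 h). The example heart rates and the linear interpolation between the two anchor points are illustrative assumptions, not values from the study.

```python
def percent_hrr(hr_work: float, hr_rest: float, hr_max: float) -> float:
    """Percent of heart rate reserve (Karvonen): 100 * (HRwork - HRrest) / (HRmax - HRrest).
    The text cites ~30% HRR as characterizing hard work and ~33% HRR as a
    common upper limit for an 8-h shift."""
    return 100.0 * (hr_work - hr_rest) / (hr_max - hr_rest)

def vo2max_limit_for_shift(shift_hours: float) -> float:
    """Upper %VO2max limit for sustained work, per the values cited above:
    ~33% (lower bound of the 33-50% range) for an 8-h shift and ~28% for a
    12-h shift (Rodgers et al.). Linear interpolation in between is an
    illustrative assumption."""
    if shift_hours <= 8:
        return 33.0
    if shift_hours >= 12:
        return 28.0
    return 33.0 + (shift_hours - 8) * (28.0 - 33.0) / 4.0

# Example: a worker with resting HR 65 bpm and HRmax 185 bpm averaging 105 bpm at work
strain = percent_hrr(105, 65, 185)   # -> 33.3% of HRR, above the 30% "hard work" level
print(round(strain, 1), vo2max_limit_for_shift(12))
```

Applied to the study's finding, a worker spending an entire 12-h offshore shift at such intensities would sit above the stricter limits quoted for long shifts.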
Comparison to other occupations The average oxygen consumption during the training modules, ranging from 923.6 ml/min (10.5 ml/min/kg, Working at Heights) to 1404.5 ml/min (18.4 ml/min/kg, Fire Awareness), is similar to that of employees in other physically demanding occupations. For example, workers of a municipal sanitation department reached comparable values (averaging 1103 ml/min over a period of 1 h) (Preisser et al. ). In that study, the authors found that refuse collection could be classified as heavy work with a high cardiovascular load, based on similar methods of field HR and VO2 measurement. Studies of other physically demanding occupations (e.g., the slaughterhouse, healthcare and metal industries, agricultural workers and laborers) produced similar results (Wultsch et al. ; Brighenti-Zogg et al. ). In all cases, however, the mean values presented here (%HRmax and %VO2max) exceeded those found in the named literature. Physical requirements for offshore work The determination of physical fitness is used in the German and UK guidelines to decide whether a subject has the physical ability to work on an offshore wind turbine (DGAUM/AWMF , renewableUK ). The performance criterion in the AWMF and renewableUK guidelines is based on heart rate and oxygen uptake, respectively.
When looked at individually, half (8/16) of the participants in our study, with an average weight-adjusted maximal oxygen consumption of 34.5 ml/kg/min, did not achieve the oxygen uptake of 35 ml/kg/min required by the ‘renewableUK’ guideline. In addition, 21% of the participants would not have met the criteria for offshore work according to the current German guidelines (2.1 W/kg for men at a HR of 150 bpm) (Table ). Taken together, the results of our study show that the average fitness level of the group is at the lower end of, or below, what is required, despite the group's relatively young average age. This could be due to a number of reasons, including a sedentary lifestyle, smoking, (pre-)obesity, or a general lack of training. Because many individuals were well below their expected values for HR and VO2max (Table ), had a high-normal BMI, and were active smokers, a combination of the above factors is likely. Another reason for the poor performance could be that oxygen uptake was in our case determined directly from CPX and heart rate, whereas the renewableUK guideline recommends VO2max determination via the so-called ‘Chester Step Test’ (CST).
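The two guideline thresholds just quoted can be expressed as a simple screening check. The sketch below is illustrative only, using the cut-offs cited in the text (35 ml/kg/min for renewableUK; 2.1 W/kg at 150 bpm for men in the German guideline); it is not an implementation of either guideline's full assessment protocol, and the example values are hypothetical.

```python
def meets_renewableuk(vo2max_ml_min: float, body_mass_kg: float) -> bool:
    """renewableUK criterion as quoted above: weight-adjusted VO2max >= 35 ml/kg/min."""
    return vo2max_ml_min / body_mass_kg >= 35.0

def meets_german_guideline(watts_at_hr150: float, body_mass_kg: float) -> bool:
    """German (DGAUM/AWMF) criterion as quoted above for men:
    >= 2.1 W/kg at a heart rate of 150 bpm."""
    return watts_at_hr150 / body_mass_kg >= 2.1

# Hypothetical 85-kg worker with an absolute VO2max of 2935 ml/min and 170 W at HR 150 bpm
print(meets_renewableuk(2935, 85))      # 2935/85 = 34.5 ml/kg/min -> False
print(meets_german_guideline(170, 85))  # 170/85 = 2.0 W/kg -> False
```

With the study group's mean of 34.5 ml/kg/min, the renewableUK check fails by a narrow margin, consistent with the 8/16 figure above.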
In the CST, VO2max is calculated using an HR-based method; in contrast to CPX, there is no direct measurement of VO2 during the CST. Furthermore, the calculated maximal values used here for comparison are provided as health recommendations for the general population, not for those employed in physically strenuous occupations, where one would expect the criteria to be more stringent. Finding a balance between safety and fairness is a challenging task when drafting fitness-to-work guidelines for these employees. None of the participants in this study reported accidents or illness while offshore; however, most were either new to the industry or had not been in employment for very long. Nevertheless, despite the general lack of accident and illness statistics for this industry, our knowledge of the offshore environment and its dangers reinforces the need for pre-employment fitness testing, to best ensure the safety of individual employees, their colleagues, and the platforms. The few reports published for offshore oil and gas, and for the wind industry, showed that the majority of illnesses were musculoskeletal in nature (Norman et al. ; Ponsonby et al. ; Thibodaux et al. ; Jürgens and Weinrich ). Musculoskeletal injuries account for more missed workdays than any other form of illness across all occupations and have long been shown to be related to physical fitness (Rayson ).
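To illustrate why CST-derived and CPX-derived values can differ, the sketch below shows the general principle behind submaximal, HR-based VO2max estimation: fit the linear HR–workload relationship from submaximal stages and extrapolate to an age-predicted HRmax. The least-squares fit, the 220 − age formula, and the oxygen-cost constants are generic assumptions for illustration; this is not the official CST protocol or its conversion tables.

```python
def estimate_vo2max_submax(workloads_w, hrs_bpm, age_years,
                           vo2_per_watt_l=0.0116, vo2_rest_l=0.25):
    """Generic submaximal estimate: fit HR = a + b*W by least squares,
    extrapolate to age-predicted HRmax (220 - age), then convert the
    extrapolated workload to oxygen uptake with an assumed cycling economy
    (~11.6 ml/min per W above a resting uptake of ~0.25 L/min)."""
    n = len(workloads_w)
    mean_w = sum(workloads_w) / n
    mean_hr = sum(hrs_bpm) / n
    b = sum((w - mean_w) * (h - mean_hr) for w, h in zip(workloads_w, hrs_bpm)) / \
        sum((w - mean_w) ** 2 for w in workloads_w)
    a = mean_hr - b * mean_w
    hr_max_pred = 220 - age_years
    w_max_pred = (hr_max_pred - a) / b        # workload at predicted HRmax
    return vo2_rest_l + vo2_per_watt_l * w_max_pred   # estimated VO2max in L/min

# Hypothetical 35-year-old: HRs of 100/115/130 bpm at 50/100/150 W
print(round(estimate_vo2max_submax([50, 100, 150], [100, 115, 130], 35), 2))
```

Because the extrapolation inherits the error of the age-predicted HRmax and of the assumed economy, individual submaximal estimates can deviate appreciably from directly measured CPX values.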
This link between injury and fitness, together with the results of this study, points to a lack of adequate fitness, which is especially relevant given the physically demanding nature of work in the offshore environment. Study limitations Due to logistical constraints, it was not possible to accompany the study participants to their actual offshore workplace. It is important to note, therefore, that while our results do show high levels of strain for the offshore employees, the data were collected during training modules. The tasks performed here, however, are comparable to those performed offshore, albeit simulated under the supervision of professionals. Assuming that this level of physical stress is transferable to work on the real platforms could lead to an underestimation of the actual physical workload, because the additional safety measures of the training setting (e.g., the presence of trainers) are absent offshore. The harsh conditions encountered offshore (e.g., extreme temperature and weather) also have an impact on an individual employee's performance: a significant amount of energy is needed to maintain body temperature homeostasis, thereby decreasing the working capacity of the employee. Our study was carried out either indoors or under relatively mild temperature and weather conditions. Due to the combination of the time constraints of the training schedules and the complex nature of the examinations, it was not possible to recruit and test a larger number of participants. In addition, while only HR recordings during the performance of actual tasks were included in our calculations (i.e., prolonged periods of rest/breaks were excluded), there were indeed rather long stretches of inactivity between the modules. As a result, an entire day's worth of recording amounted to anywhere from 1 to 4 h of usable data, which could not be extrapolated to a full 8–12-h workday.
Also, despite the good level of correlation between HR and VO2 seen here and in other studies, it is important to consider that HR is a nonspecific strain response. Factors such as the type of activity, psychologically stressful situations, and simultaneous heat or cold exposure can all affect its value (Sammito et al. ; Wilson and Corlett ). Although these effects are possibly negligible in a physically demanding work setting, it is not uncommon for employees in the offshore industry to be exposed to some or even all of these conditions during their rotations. Direct measurement of oxygen consumption at the offshore workplace would, therefore, be a better way to measure physical burden but is, based on the authors' experience to date, impractical. The relatively young average age of our study group (35 years) is similar to that observed for the offshore wind sector. As of 2012, roughly 65% of offshore employees worldwide were under 40 years of age, although there remained a small “core” of experienced workers (e.g., from other similar industries such as oil and gas) over the age of 51 (Willis ). Our final collective, consisting of 23 men (data from the one female subject were not included in the statistical analyses), also reflects the gender distribution in the offshore wind energy industry. In Germany, approximately 19,000 people are employed in the offshore wind energy sector (FMEAE ). As of 2015, however, women made up less than 10% of all workers (Kubsova and Felchner ). The average BMI of 25 kg/m² places our study sample on the boundary between normal weight and pre-obesity. BMI, however, does not distinguish between muscle, bone, fat mass, and level of physical fitness.
Nevertheless, many individuals in our cohort had a low–normal fitness level (based on CPX and predicted maximal values), suggestive of a lack of training. On average, however, our cohort was within the acceptable range of physical fitness for their age and sex categories, with a mean maximal load of 2.9 W/kg (Table ).
The high physical demands of the offshore workplace are obvious and are, therefore, reflected in the physical fitness requirements put forth in the various guidelines. CPX or cycle ergometry testing, however, represents only one (albeit fundamental) facet of the requirements of the offshore environment. Fitness must also be evaluated with respect to other aspects of work safety, including the individual ability to assess risk, to communicate effectively and work reliably with colleagues, and to handle unanticipated situations in a skilled and efficient manner. Such abilities cannot be measured in a laboratory or field setting with the use of specific equipment but are nonetheless vital to the smooth and safe functioning of the workplace. Appropriate training concepts must, therefore, also place emphasis on teamwork, for example by including interactions with experienced colleagues in the modules. We present only a starting point for future studies and suggest that more in-depth investigations be performed, both to assess the physical strain experienced by offshore employees and to re-evaluate the current limits for physical work provided in the literature. Once achieved, it may be pertinent to incorporate task-specific fitness testing into evaluations of aptitude, as a means to better assess and prepare employees for their desired place of work. This study also demonstrates the need for a review and thorough evaluation of the eligibility criteria and their foundation, as formulated in the current guidelines and recommendations.
Liver support systems and liver transplantation in acute liver failure | c964f339-88b4-4172-8893-388a8dfe0da7 | 11815598 | Surgical Procedures, Operative[mh] | INTRODUCTION The liver synthesizes more than 20 000 individual proteins and a subset of these are exclusively synthesized in liver and are often vital. Massive hepatic injury results in a multitude of serious complications that is due to the compromised synthetic function and from accumulation of endogenous protein‐bound toxins , and water‐soluble molecules. , , The consequence is a development of a series of complications with a clinical presentation that is multifaceted ranging from slightly altered conscious level with profound coagulopathy to a catastrophic failure of multiple organs, including development of uncontrollable systemic inflammation and fatal cerebral edema. , The definition of ALF requires four elements to be fulfilled: (1) signs of hepatocellular injury with elevated transaminases, (2) a high International Normalized Ratio (INR) together with a (3) compromised mentation and (4) in the absence of chronic liver disease. , Some patients develop HE rapidly after liver injury and the condition is termed hyperacute liver failure while others present with a 2 to 12 weeks interval from the liver damage to HE evolves which is termed acute and subacute liver failure. The causes of ALF encompass a wide variety of toxic, viral, metabolic aetiologies which often relies on the socioeconomic location of the patient. The aetiology of ALF is most often due to drug‐induced liver injury, and especially paracetamol (acetaminophen) overdose accounts for approximately 40% of the cases with ALF in Europe and the US. , , Autoimmune hepatitis, acute viral hepatitis (A and E), hypoxia and mushroom poisoning are other common etiological causes of ALF. 
In India, yet another etiological challenge has evolved, namely poisoning with yellow phosphorus, which is used with suicidal intent and is now the most frequent etiologic cause of ALF there. , Vaccination programs, improved sanitation, restriction of over-the-counter medicines, and earlier and proper management in local emergency departments are among the factors that have reduced the incidence of viral infections and drug-induced ALF and allow earlier referral to a tertiary LT centre. , The liver regenerates continuously as part of normal homeostasis, but it also has an enormous ability to regenerate even after severe acute injury, with re-establishment of size and function by proliferation of parenchymal cells and a concomitant shift of the proinflammatory state towards a more anti-inflammatory condition. , However, this restorative capacity may be overwhelmed in some severe cases, leading to progression of multiorgan failure and death if LT is not performed. Although LT has improved the gloomy outlook for many patients with ALF, donor organ shortage, the risks of major surgery, and the complications of life-long immunosuppression provide an incentive to pursue spontaneous recovery of the native liver by finding new ways to improve liver intensive therapy that allow time for spontaneous liver regeneration and survival. , , The proportion of patients surviving by spontaneous regeneration of the damaged liver has increased over the last four decades from 15%–20% to a current level of about 60%. , This success relies on a better understanding of the pathophysiology of ALF and on continuous improvements in critical care. This review article provides an overview of the pathophysiology and recent advances in the medical management of ALF, with a focus on the rational use of extracorporeal liver support devices and emergency LT.
PATHOPHYSIOLOGY OF ALF Patients with ALF are in a devastating medical condition that relates to two distinct and catastrophic pathophysiological events. The first is hepatocellular necrosis, which impairs the ability to synthesize urea. The consequence is that the normally functioning intestine combined with the necrotic, failing liver becomes an ammonia-producing organ system. , Ammonia released from the failing liver into the systemic circulation readily crosses the blood–brain barrier and impairs the normal function of various neurotransmitter cycles. If severe hyperammonemia persists for more than 1–2 days, it causes not only HE but also cortical (astrocyte) swelling, which in severe cases leads to brain oedema and intracranial hypertension (ICH). , , , The second event is caused by the dying, necrotic liver cells, which release fragments of degrading large molecules, that is, damage-associated molecular patterns (DAMPs) such as high-mobility group box protein, DNA and RNA fragments, pro-inflammatory cytokines, S100 proteins, hyaluronan fragments, purine metabolites, etc. This tissue decay not only triggers the release of inflammatory cytokines, chemokines, and ligands from non-parenchymal cells, such as Kupffer cells, hepatic stellate cells, and various immune cells, but also results in an overflow of DAMP molecules into the systemic circulation, which activates circulating monocytes and macrophages and leads to a further release of pro-inflammatory cytokines. , The DAMPs comprise both lipophilic and hydrophilic molecules, and their massive release quickly translates into a clinical picture that resembles septic shock. Together, hyperammonemia and the release of DAMPs cause an ‘endogenous intoxication syndrome’, a sepsis-like condition with multiple organ failure and concomitant development of coma and brain oedema.
In this fragile medical situation, it would be highly desirable to have a procedure that ensures rapid purification of the blood of both hydrophilic and lipophilic toxic substances, to ameliorate the systemic inflammatory response and allow time either for spontaneous liver regeneration or for finding a liver graft for the patient.

MANAGEMENT AND LIVER SUPPORT SYSTEMS

Management of patients with ALF aims to ensure or restore vital organ functions and to halt or reverse the development of multiorgan failure. In this context, an artificial liver assist device would be of great value. Most attempts to find an effective extracorporeal liver support device have been based on dialysis techniques that may help reduce the levels of circulating ammonia and DAMP molecules. The clearance of water-soluble toxins like ammonia and of protein-bound DAMP molecules from the circulation has been studied in both biological and bioartificial liver support devices, all with rather disappointing results. The latter are outside the scope of this review, which focuses on artificial liver support devices: haemodialysis and CRRT, albumin dialysis and plasma exchange.

3.1 Haemodialysis/CRRT

Haemodialysis of patients in liver failure removes smaller water-soluble molecules such as ammonia. In a study of 41 patients with ALF who underwent 180 sessions of intermittent haemodialysis (IHD), the grade of HE improved in more than 60% of the patients; however, IHD did not change the survival rate compared with a historical control group (23% vs. 18%). Subsequent non-randomized controlled studies in patients with ALF showed that CRRT reduced the ammonia level, the risk of cerebral oedema and episodes of ICH. A retrospective study of 340 patients with ALF, of whom 59 were treated with IHD, 61 with CRRT and 220 with no RRT, showed that ammonia decreased by 23%, 38% and 19% with IHD, CRRT and no RRT, respectively.
Interestingly, the ammonia reduction with CRRT was significantly better than in the no-RRT group, whereas the reduction with IHD was not. CRRT was associated with a reduction in mortality, whereas IHD was associated with a decrease in survival. Although no randomized controlled trials have been performed, CRRT is now a central part of supportive care to reduce ammonia, lactate, body temperature and circulating proinflammatory cytokines, though its role is still a matter of debate. Furthermore, CRRT ensures a high sodium level that counteracts the tendency towards water influx into the brain and formation of cerebral cortical oedema, which contains high concentrations of the important organic osmolyte glutamine.

3.2 Plasma exchange

The idea of replacing 'toxic' plasma from the patient with fresh frozen plasma to such an extent that it would replace some of the failing liver's capacity to remove toxic substances was coined by Dr Tygstrup and termed high-volume plasmapheresis, well before the importance of hyperammonemia and circulating DAMP molecules was known. The procedure originally aimed to exchange the entire extracellular volume each day for three consecutive days, based on clearance calculations using bilirubin as a marker of protein-bound toxins. The concept was tested during the 1990s and, for pragmatic reasons, was modified to replace only about 130 mL/kg body weight. In a subsequent randomized controlled trial of 182 adult patients with ALF, high-volume plasma exchange was compared with standard medical treatment. The study demonstrated an improvement in overall hospital survival (58.7% vs. 47.8%; HR 0.56; 95% CI 0.36–0.86; p = .008). High-volume plasmapheresis prior to LT did not improve survival compared with standard medical therapy alone (CI 0.37–3.98; p = .75).
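As an illustration of the dosing arithmetic behind high-volume plasmapheresis, the sketch below converts the pragmatic 130 mL/kg dose into litres exchanged per session and over a three-day course; the function name and the 70 kg example are illustrative, not taken from the trials.

```python
def exchange_volume_l(weight_kg: float, dose_ml_per_kg: float = 130) -> float:
    """Plasma volume exchanged in one session, in litres.

    130 mL/kg is the pragmatic dose adopted in the 1990s trials; the
    original concept targeted the entire extracellular volume daily.
    """
    return weight_kg * dose_ml_per_kg / 1000.0


# For a 70 kg adult: 9.1 L per session, about 27.3 L over three consecutive days.
per_session = exchange_volume_l(70)
three_day_course = 3 * per_session
```

The point of the arithmetic is that even the reduced pragmatic dose exchanges a volume well above a typical adult plasma volume over a three-day course.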
For those patients who fulfilled poor prognostic criteria but were not listed for LT because of contraindications (such as psychiatric disease or medical comorbidity), survival was significantly higher with high-volume plasmapheresis than with standard medical care alone. Studies of the immunological impact of plasma exchange were undertaken to explain these results and showed that removal of DAMP molecules and reduction of circulating ammonia were of central importance. A subsequent study by Maiwall et al. confirmed that this simple procedure actually improves survival in adult patients with ALF by modulating the innate immune system. Another, more recent controlled study in children with ALF also supports the use of plasma exchange, as it seems to improve survival even though the plasma volume exchanged per kilogram body weight was very low. The combination of CRRT and plasmapheresis is attractive from the pathophysiological perspective provided here, as it relieves the high circulating levels of ammonia and DAMP molecules at the same time, and it is currently used in several major liver failure centres around the world (Figure ). Yet another, more elegant way to combine removal of protein-bound molecules with removal of water-soluble toxins is albumin dialysis.

3.3 Albumin dialysis

Albumin dialysis has been shown to reduce the circulating levels of both water-soluble and lipophilic substances such as ammonia, aromatic amino acids, creatinine, transaminases, bilirubin and several inflammatory cytokines. This blood-cleaning effect of albumin dialysis using the Molecular Adsorbent Recycling System (MARS) results in alleviation of both HE and systemic circulatory instability. In a multicentre study from the U.S. Acute Liver Failure Study Group (USALFG), 104 ALF patients who received MARS were propensity-score matched to 416 controls.
The multivariable conditional logistic regression, adjusting for ALF aetiology (paracetamol: n = 248; non-paracetamol: n = 272), age, vasopressor support, international normalized ratio, King's College Criteria and propensity score, showed that MARS significantly increased transplant-free survival. Only one prospective, randomized controlled trial has been performed in patients with ALF. This national French study, the Fulmar study, was unable to demonstrate an overall survival benefit with MARS (n = 53) compared with the control group (n = 49) at 6 months and 1 year (85% in the MARS arm vs. 76% in the control arm at 6 months, and 83% vs. 76% at 1 year, respectively). In patients with paracetamol-related ALF, the 6-month survival rate was 68.4% (CI 43.5%–86.4%) in the control group and 85.0% (CI 61.1%–96%) with MARS (p = .46). Subgroup analyses of transplant-free survival at 6 months (19% in MARS-treated patients vs. 27% in controls) also found no significant difference. However, a secondary analysis showed that the dosing of MARS actually influences the outcome: survival was significantly improved in patients who received ≥3 MARS sessions (n = 16) compared with those who received <3 sessions (n = 88). This signal suggests that MARS therapy is of value to patients with ALF who were not listed for LT, just as seen in the plasma exchange studies. Hence, liver support with ≥3 MARS sessions may currently be considered for patients with ALF who are not candidates for emergency LT because of definitive or temporary contraindications. On the other hand, initiation of MARS therapy may indeed be important in countries where immediate LT is not an option owing to the shortage of donor organs.

LIVER TRANSPLANTATION

4.1 Prognostic evaluation

Factors associated with progression from acute liver injury to ALF in patients admitted to the emergency room or to hospital are not clearly established. It is estimated that only about 30% of patients who develop an acute liver injury progress to ALF. Aetiology, age, comorbidities and severity of disease at presentation can affect the natural course of the disease. The main challenge is to identify early those patients who will not survive with standard medical treatment alone and will need LT to ensure survival. Before the advent of LT, the overall mortality rate in patients with ALF ranged from 80% to 85%. Since the late 1980s, LT has been the standard management of these patients. Two prognostic criteria, based on viral and paracetamol aetiology respectively (the Clichy-Villejuif and King's College criteria), were rapidly developed and have since been used to select ALF patients for transplantation worldwide. In the absence of LT, only few patients who meet these criteria survive (10%–20%).
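For orientation, the commonly published King's College criteria can be expressed as a simple rule check. The thresholds below are the widely cited ones (arterial pH < 7.30; INR > 6.5; creatinine > 300 μmol/L; HE grade III–IV; and, for non-paracetamol ALF, the five-item "any three" rule); treat this as an illustrative sketch, not a clinical decision tool.

```python
def kcc_paracetamol(arterial_ph, inr, creatinine_umol_l, he_grade):
    """King's College criteria, paracetamol aetiology (commonly cited form):
    arterial pH < 7.30 after resuscitation, OR all three of INR > 6.5,
    creatinine > 300 umol/L and grade III-IV hepatic encephalopathy."""
    if arterial_ph < 7.30:
        return True
    return inr > 6.5 and creatinine_umol_l > 300 and he_grade >= 3


def kcc_non_paracetamol(inr, age, unfavourable_aetiology,
                        jaundice_to_he_days, bilirubin_umol_l):
    """Non-paracetamol form: INR > 6.5 alone, or any three of the five
    items below (unfavourable_aetiology flags e.g. seronegative hepatitis
    or an idiosyncratic drug reaction)."""
    if inr > 6.5:
        return True
    items = [age < 10 or age > 40,
             unfavourable_aetiology,
             jaundice_to_he_days > 7,
             inr > 3.5,
             bilirubin_umol_l > 300]
    return sum(items) >= 3
```

Patients meeting either rule fall into the 10%–20% spontaneous-survival group described above, which is why the criteria are used to trigger listing for emergency LT.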
Various other criteria and/or variables correlate with outcome, such as the lactate level in paracetamol-induced ALF, MELD score >32, APACHE II score >19, bilirubin >200 μmol/L in non-paracetamol aetiology, the levels of phosphate, Gc-globulin, alpha-foetoprotein and factor V, and the CT-estimated volume of the failing liver, with wide ranges of sensitivity (60%–100%) and specificity (67%–100%). Aetiology of ALF is a major determinant of outcome. The most common cause of ALF in Northern Europe is paracetamol intoxication (suicide attempt or accidental overdose). In data from the European Liver Transplant Registry (ELTR) on 4903 adult patients (>16 years) transplanted for ALF, paracetamol overdose represented only 12% of the indications. Of note, the proportion of patients who underwent LT for paracetamol-induced ALF has not changed over time, varying between 36% and 39% in the King's College experience and remaining stable at about 40% over the last 15 years at the Paul Brousse hospital. Spontaneous survival with standard medical care in paracetamol- and HAV-induced ALF is >40%, whereas patients with some drug-induced, unknown, autoimmune, HBV reactivation/flare and acute Budd-Chiari aetiologies have a very poor outcome in the absence of transplantation. Hepatic encephalopathy grade III-IV is another main determinant of outcome. An arterial ammonia level >100 μmol/L on admission is an independent risk factor for the development of high-grade HE, and a level >200 μmol/L predicts ICH and a poor outcome. Bernal and co-workers observed a significant fall in the proportion of patients admitted with signs of ICH, from 76% in 1984–1988 to 19.8% in 2004–2008, associated with a significant fall in mortality from 95% to 55%, respectively.
However, in their experience, the development of ICH was strongly associated with increased mortality (73.6% among 648 patients with ICH), and 55% of their most recent patients (2003–2008) who developed ICH died. In our French series of 384 patients with ALF, a significant decrease in patients admitted with HE grade IV was observed (76% before 1996 vs. 26% after 1996), and HE grade III-IV was associated with increased mortality and poor spontaneous survival. A third major determinant of outcome is the development of organ failures requiring multi-organ support; this does not contraindicate transplantation but is associated with decreased spontaneous survival. Recently, Karvellas et al. reported on 624 patients with ALF from the Acute Liver Failure Study Group (ALFSG) listed for LT. Overall, 398 (64%) underwent LT, 100 (16%) died without LT, and 126 (20%) recovered spontaneously. Patients who died on the waiting list, compared with those who underwent LT, had more severe multiorgan failure, reflected by higher use of vasopressors (65% vs. 22%), mechanical ventilation (84% vs. 57%), and renal replacement therapy (57% vs. 30%; p < .0001 for all). After adjusting for relevant covariates, age, paracetamol/acetaminophen aetiology, requirement for vasopressors, HE grade III/IV and MELD score were independently associated with death without receipt of LT. Among organ failures, brain monitoring is still recommended in the perioperative period of LT because of the increased risk of brain herniation in patients with severe HE and persistent hyperammonemia. The use of intracranial pressure (ICP) monitoring is not widespread among LT centres, mainly because of the related risk of intracranial bleeding. In the absence of an ICP device, spontaneous intracranial bleeding is extremely rare. Non-invasive techniques such as transcranial Doppler ultrasound and pupillometry are preferred alternatives.
The main therapeutic objective is to maintain an adequate mean arterial pressure and cerebral perfusion pressure to ensure brain viability. LT remains a salvage therapy until a very advanced stage, that is, a flat EEG pattern and/or clinical signs of brain herniation. Haemodynamic instability requiring very high doses of vasopressors can also preclude the transplant procedure. In very rare situations, for patients with massive liver necrosis, severe acidosis and haemodynamic instability despite vasopressors, a few LT centres have performed total hepatectomy with a portocaval anastomosis to stabilize the patient while waiting for the graft; this may work because the level of circulating DAMP molecules decreases. Until a graft becomes available, these patients are supported by CRRT for correction of pH, sodium, ammonia and lactate, supplemented if needed with plasmapheresis or albumin dialysis for removal of protein-bound toxins such as circulating DAMP molecules. Finally, the rapid progression of the disease once HE grade II occurs, and the low sensitivity of the currently available criteria, have led to the development of more dynamic scores. Once listed for emergency LT, some patients might still have a chance of spontaneous recovery with current standard medical care including CRRT and liver support therapy; this is most often seen in patients with hyperacute liver failure. Bernal et al. developed a decision support tool to estimate the risk of death and support decision making in patients with paracetamol-induced ALF, based on the following variables: age, cardiovascular failure, Glasgow coma scale score, arterial pH, creatinine, INR and arterial lactate. The AUROC for 30-day survival was 0.92 (95% CI 0.88–0.96) using the day 1 model and 0.93 (0.88–0.97) using the day 2 model. Ichaï et al. analysed, in the French national ALF cohort, factors that were predictive of spontaneous recovery.
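The AUROC quoted for the Bernal model is, in discrete form, the Mann-Whitney probability that a randomly chosen survivor is scored higher than a randomly chosen non-survivor, with ties counting one half. A minimal sketch with invented toy scores, for illustration only:

```python
def auroc(pos_scores, neg_scores):
    """AUROC as the Mann-Whitney statistic: the fraction of (positive,
    negative) pairs in which the positive case scores higher (ties 0.5)."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))


# Perfect separation of the two groups gives 1.0; chance-level scoring gives 0.5.
perfect = auroc([0.9, 0.8, 0.7], [0.2, 0.1])  # 1.0
```

An AUROC of 0.92–0.93, as reported for the Bernal tool, therefore means the model ranks a dying patient above a surviving one in more than nine of ten such pairs.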
Hepatic encephalopathy grade 0–2, creatinine clearance ≥60 mL/min/1.73 m², bilirubin <200 μmol/L and a factor V level >20% were strong predictors of spontaneous recovery in the paracetamol group, whereas only a bilirubin level <200 μmol/L (OR 10.38; 95% CI 4.71–22.86) predicted spontaneous recovery in non-paracetamol ALF. In the ALFSG transplanted cohort, age below 40 years and paracetamol aetiology were associated with spontaneous survival. Overall, clinical neurological monitoring together with serial measurements (2–3 times daily) of liver enzymes, coagulation parameters, ammonia and lactate should be performed until a graft becomes available on site.

4.2 Results of liver transplantation in ALF

There has been constant improvement in the results of LT globally, and for ALF in particular over the last decade. Survival after LT was reported by the European Liver Transplant Registry (ELTR) for 90 000 liver transplants performed from 2007 to June 2022. The proportion of patients transplanted for ALF was 6.5% (5818 patients). The 1-, 5- and 10-year patient survival rates were 78%, 71% and 64%, respectively, compared with 87%, 75% and 60% for those transplanted for cirrhosis (p = .0001) (Figure ). In the paediatric population aged 2–18 years (n = 675 ALF patients, 13.6% during the same period), the 1-, 5- and 10-year patient survival rates were 82%, 77% and 72%, respectively, compared with 87%, 83% and 78% for those transplanted for cirrhosis (p = .01). Similar improvement was observed in the national French cohort, reaching a 1-year patient survival of 89% among patients who underwent LT.
Several reasons may have contributed to this improvement: earlier transfer of patients with acute severe hepatitis to transplant centres before the occurrence of HE, and a decrease in patients admitted with HE grade IV; early introduction of medical management for patients admitted to the liver ICU, including monitoring, prevention and treatment of early signs of ICH, fever control, microbiological surveillance and prophylaxis, renal replacement therapy modalities, and liver support by albumin dialysis or plasma exchange; and, finally, technical improvements in transplant surgery and in intra- and perioperative care. In the OPTN/SRTR 2021 liver annual report, ALF represented 2.5% of the indications for transplant. The 5-year patient survival after deceased donor liver transplantation was 79.2% for ALF (Figure ). In the paediatric population, the 5-year patient survival with deceased donor grafts was 89.4% for ALF and 94.8% for biliary atresia. In the report of Karvellas and colleagues on 398 patients with ALF from the ALFSG who underwent LT, the 1- and 3-year post-LT patient survival rates were 91% and 90%, respectively.

CONCLUSION

ALF is a rare disease with multiple aetiologies that rapidly leads to clinical deterioration with development of multiorgan dysfunction, driven by high circulating levels of DAMPs, PAMPs and ammonia. Spontaneous recovery is more likely in young patients (below 40 years) with hyperacute liver failure and mild manifestations of HE. Early administration of artificial liver support, including plasma exchange, albumin dialysis and CRRT, improves survival without LT, most significantly in patients with a hyperacute or acute presentation. Spontaneous survival is rare in patients with subacute liver failure, in whom LT is almost always needed. For ALF patients who need LT for survival, there has been tremendous worldwide improvement in the results in recent years.
Post‐transplant survival is now reaching nearly 90%, which is only slightly inferior to LT for chronic liver disease. The authors do not have any disclosures to report.
Validation of SCORE2 on a sample from the Russian population and adaptation for the very high cardiovascular disease risk region

One of the most important achievements of clinical epidemiology is the development of risk-prediction models. The European Society of Cardiology (ESC) recommends the use of risk-prediction models to improve healthcare and prevention across the population. The main goal of these models is to identify people at increased risk of cardiovascular disease (CVD), who might receive the greatest benefit from a preventive intervention. The basis for the development of all future prognostic models was laid in the Framingham Heart Study. The main issue that accompanies any risk assessment model is the accuracy of its predictions. The Framingham model of cardiovascular risk assessment proved to be quite accurate under certain conditions and in certain populations, but not accurate enough in others, especially European ones. This formed the basis for the creation of the European risk assessment model SCORE. SCORE provides a direct estimate of 10-year fatal cardiovascular risk in a format suited to the constraints of clinical practice. In particular, SCORE was introduced and has been actively used in Russia for 20 years. It has proved very useful in practice, since it provides an intuitive way to direct a patient's treatment strategy. However, SCORE has two weaknesses. The first is that SCORE predicts the risk of fatal cardiovascular (CV) events only. The second is that SCORE was developed from cohorts recruited before 1986. Both of these aspects were addressed in a new model for predicting cardiovascular risk: SCORE2. SCORE2 (Systematic COronary Risk Evaluation 2) is a risk assessment scale for cardiovascular events, presented in 2021 by the European Society of Cardiology.
SCORE2 estimates the risk of a CV event over a 10-year period for people aged 40–69 years without a history of cardiovascular disease (CVD), chronic kidney disease, diabetes mellitus (DM), or familial hypercholesterolemia. Gender, age, smoking status, systolic blood pressure, total cholesterol, and HDL cholesterol are the risk factors in the SCORE2 model. SCORE2 also provides an interpretation of risk estimates in terms of 10-year CVD risk groups: low-to-moderate, high, and very high. SCORE2 has a potential limitation for countries of the very high CVD risk region: no representative cohorts from this region were used for model training. Moreover, for validation within the very high CVD risk region, cohorts from only two cities (Kaunas, Lithuania and Novosibirsk, Russia, from the HAPIEE Study) were used. These two cohorts can hardly be considered representative of the entire very high CVD risk region, so we concluded that SCORE2 was not sufficiently validated for use in this region. One more questionable aspect is that, according to the simplified SCORE2 estimation chart presented in the original article on SCORE2, no men from the very high CVD risk region would be classified as at "low-to-moderate 10-year CVD risk" at all. In other words, according to the SCORE2 estimation chart, all men from countries of the very high CVD risk region would need to consider some treatment of CVD risk factors. Such a strategy is hardly sensible and, if applied, would paralyze the healthcare systems of those countries. The aim of our study was to evaluate the accuracy of SCORE2 risk estimates for the Russian population and to develop an adapted interpretation of SCORE2 risk estimates for clinical practice in Russia and other countries of the very high risk region.
Materials

In our work we used data from the following three sources: the epidemiological study ESSE-RF (Epidemiology of Cardiovascular Diseases and their Risk Factors in Some Regions of the Russian Federation); the Moscow part of the WHO MONICA Project (Multinational Monitoring of Trends and Determinants of Cardiovascular Diseases); and the Russian Fertility and Mortality Database (RusFMD).

ESSE-RF

Data on the frequency of CV events in the Russian population for 2012–2019 were obtained from the ESSE-RF epidemiological study. The recruitment period for ESSE-RF was from September 15th, 2012, to June 10th, 2014. The response rate of the study was about 80%. The total number of ESSE-RF participants was 18037. For the analysis, data from 7251 individuals aged 40–69 years without a history of CVD and/or type 2 diabetes mellitus (in accordance with the SCORE2 inclusion criteria) were used. The considered follow-up period was 7 years (2013–2019). The period 2020–2022 was excluded from consideration in order to avoid the impact of the COVID-19 pandemic on the analysis. Information about CV events (CVD death, nonfatal myocardial infarction (MI) and/or acute cerebral circulation disorder) and non-CVD deaths among the study participants was collected during the follow-up period. In total, 234 (3.2%) CV events (fatal or non-fatal) and 135 (1.9%) competing events (non-CVD deaths) were recorded.

ESSE-RF ethics statement

The general protocol of the ESSE-RF study has been published previously, and the study itself has been registered at clinicaltrials.gov (NCT02449460). The study was approved by three ethics committees: the National Medical Research Center for Therapy and Preventive Medicine, the Russian Cardiology Research-and-Production Complex, and the Federal Almazov North-West Medical Research Centre.
The ESSE-RF study was carried out in accordance with the ethical provisions of the Declaration of Helsinki and the National Standard of the Russian Federation "Good Clinical Practice (GCP)" GOST R52379-2005. In order to comply with the above-mentioned laws, as well as Article 93 of Federal Law No. 323-FZ of November 21, 2011 "On the Fundamentals of Health Protection of Citizens of the Russian Federation", each subject signed a written consent to the processing of their personal data for the purposes of the study.

Moscow MONICA

The Moscow part of the WHO MONICA project 1988–1998 (Moscow MONICA) was used as a reference epidemiological study for ESSE-RF. The total number of Moscow MONICA participants was 2420. For the analysis, data from 1663 individuals aged 40–64 years were used. The considered follow-up period for every participant was 7 years. Information about 141 (8.5%) fatal events among the study participants was collected during the follow-up period. For the current work, depersonalized data of Moscow MONICA were accessed June 1st, 2023.

RusFMD

For country-level statistics for 1988–2019, data from RusFMD were used. They were provided by the Center for Demographic Research of the Russian School of Economics and contain detailed indicators of birth rates and mortality of the population of Russian regions. All descriptive statistics presented in RusFMD were calculated on the basis of population statistics data received from the Federal State Statistics Service.

In the classical setting of survival analysis, each patient may be assigned an "event" or "censored" status. It is generally assumed that censoring does not contain any information about the timing of a potential event. For ESSE-RF this assumption is not entirely valid, in the following sense.
Due to the difficulty of accessing personal data of ESSE-RF participants, some events of interest among the participants may not have been recorded. Namely, in a considerable number of cases ESSE-RF investigators were unable to contact either the participant or their representatives after a potential event had occurred. In accordance with the established methodology of epidemiological studies, all such observations were logged in ESSE-RF as censored from the date of the last contact. For any given censored observation in ESSE-RF, it is therefore impossible to say whether it was genuinely non-informative censoring or censoring arising from the problem identified in the previous paragraph. However, it was clear that the "censored" observations described above could not be processed in the usual way. We refer to the phenomenon described above as "event information loss".

Assessment of cardiovascular event loss in ESSE-RF

SCORE2 provides an estimate of the risk of a CV event occurring. Therefore, to validate SCORE2 risk estimates using the CV event incidence among ESSE-RF participants, event information losses must be accounted for. It was assumed that information about primary CV events in ESSE-RF was lost with the same probability as information about deaths. This assumption allows the estimate of the mortality information loss to be used as the estimate of the primary CV event information loss. It may seem that mortality information loss could be estimated by comparing mortality rates in ESSE-RF and the Russian Fertility and Mortality Database. However, mortality rates in ESSE-RF would a priori differ from the mortality rates of the entire Russian population due to the inclusion criteria of the study. For example, subjects from marginal strata of the population, people with terminal stages of diseases, or people with a high level of distrust in scientific research were not included in ESSE-RF.
Therefore, a direct comparison of mortality in ESSE-RF and RusFMD would not be meaningful. To estimate the mortality information loss in ESSE-RF using RusFMD, it was necessary to estimate the discrepancy between mortality rates in the entire Russian population and mortality rates in the part of the Russian population meeting the ESSE-RF inclusion criteria. This discrepancy is called the representativeness coefficient of ESSE-RF relative to the entire Russian population and is denoted by the coefficient C. The interpretation of C is as follows: if C equals 2, then the mortality rate in the entire Russian population is twice as high as in the part of the Russian population meeting the ESSE-RF inclusion criteria. The representativeness coefficient of the ESSE-RF study was estimated using Moscow MONICA, since ESSE-RF and Moscow MONICA were assumed to have the same representativeness coefficient relative to their corresponding populations. This was because they had the same inclusion criteria and data collection methodology. However, unlike in ESSE-RF, all fatal events in Moscow MONICA within the first 7 years of the study were reliably identified, because in 1988–1999 the legislation allowed checking the vital status of a person through the registry office. Therefore, any difference in seven-year mortality between Moscow MONICA and the RusFMD data for Moscow would be explained by the representativeness coefficient of Moscow MONICA relative to the entire population of Moscow. This allowed the representativeness coefficient C to be estimated by comparing mortality in Moscow MONICA and RusFMD. Once the representativeness coefficient of ESSE-RF was estimated, the mortality information loss could be estimated too. To do so, mortality rates of ESSE-RF were compared with RusFMD mortality rates adjusted for the representativeness coefficient.
In what follows, the information loss is denoted by the coefficient B and interpreted as follows: if B equals 1.5, then in the part of the Russian population meeting the ESSE-RF inclusion criteria, mortality was 1.5 times higher than the mortality observed in ESSE-RF. Note again that mortality information was assumed to be lost with the same probability as information about primary CV events, so the estimate of B is also the estimate of the primary CV event information loss. The exact definitions and methods of estimating C and B are given in the "Statistical analysis" section.

Methodology of comparison between ESSE-RF cardiovascular event rates and SCORE2 risk estimates

To assess the accuracy of SCORE2 risk estimates for ESSE-RF participants, these estimates were compared with the observed CV event rates of ESSE-RF, adjusted for CV event information loss. The observed CV event rates of ESSE-RF, adjusted for CV event information loss, were calculated as follows. First, we took ESSE-RF data on risk factors and CV events and fitted a model to obtain estimates of the 7-year risk of a CV event for every ESSE-RF participant. These risk estimates were called the observed CV event rates in ESSE-RF. Second, the observed CV event rates were adjusted with the coefficient B to account for CV event information loss during the 7-year period. These adjusted rates were considered estimates of a participant's risk of a CV event within 7 years from the moment of entering the study. Finally, the 10-year risks of CV events were estimated using the risk multiplicativity assumption. The 10-year risks of CV events were not estimated directly from ESSE-RF in order to avoid the influence of the COVID-19 pandemic on CV event rates. The resulting 10-year risk estimate for every participant is referred to as SCORE2-ESSE. The model is described in more detail in the Statistical analysis section.
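The pipeline just described (observed 7-year risk, adjusted for information loss, then extrapolated to 10 years) can be sketched numerically. The sketch below is illustrative Python (the study's analysis was performed in R), and it relies on two assumptions not spelled out in the text: that the information-loss adjustment multiplies the 7-year risk by B, and that the "risk multiplicativity assumption" is read as survival scaling, S(10) = S(7)^(10/7).

```python
def ten_year_risk(observed_risk_7y: float, b: float) -> float:
    """Illustrative SCORE2-ESSE-style extrapolation (assumed reading):
    1) adjust the observed 7-year risk for event information loss (x B);
    2) extrapolate to 10 years via survival scaling S(10) = S(7)**(10/7)."""
    adjusted_7y = min(b * observed_risk_7y, 1.0)   # information-loss adjustment
    survival_7y = 1.0 - adjusted_7y
    survival_10y = survival_7y ** (10.0 / 7.0)     # multiplicativity assumption
    return 1.0 - survival_10y

# e.g. an observed 7-year CV event risk of 5% with B = 1.4
print(round(ten_year_risk(0.05, 1.4), 4))  # ≈ 0.0985
```

Note that the actual adjustment used in the paper applies B to the baseline incidence function inside the Fine-Gray model (see the Statistical analysis section); the sketch above simplifies this to a direct multiplication of an individual risk.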
The assessment of SCORE2 risk estimates for ESSE-RF participants was made by comparing SCORE2 and SCORE2-ESSE for every participant.

Statistical analysis

Statistical analysis was performed in R version 4.2.1. Continuous variables were described with the median and quartiles: MED [Q25; Q75].

Representativeness coefficient

The probabilities of death over a 7-year period for people aged m years in the entire population and in a specified part of the population were assumed to be related according to the formula

P(m < T_pop ≤ m + 7 | T_pop > m) = C · P(m < T_spec ≤ m + 7 | T_spec > m),

where T_pop denotes a random variable corresponding to the lifetime of a person in the entire population, T_spec denotes a random variable corresponding to the lifetime of a person in a specified part of the population, and C is a number (it does not depend on m). The specified part of the population consisted of those people from the entire population who met the inclusion criteria of the epidemiological study. The coefficient C was called the representativeness coefficient of the epidemiological study. The representativeness coefficient C of Moscow MONICA was estimated as follows: the probability on the left side of the formula above was estimated with RusFMD data for the corresponding period; the probability on the right side was estimated with the MONICA data; and C was estimated as their ratio. The procedure was carried out for subpopulations aged 40 to 64 years at the beginning of the study (due to data availability and the SCORE2 target population criteria), and 25 estimates of the coefficient C were obtained:

Ĉ_k = P̂(k < T_pop ≤ k + 7 | T_pop > k) / P̂(k < T_spec ≤ k + 7 | T_spec > k), k = 40, …, 64.

The numerator was estimated with a formula analogous to the Kaplan-Meier product-limit estimate:

P̂(k < T_pop ≤ k + 7 | T_pop > k) = 1 − (1 − q_{k+6}) · … · (1 − q_k),

where the q_k = P̂(T ≤ k + 1 | T > k) were obtained from RusFMD.
The denominator was estimated from the MONICA data using the classical Kaplan-Meier method. Note that this method of estimating the probability in the denominator was valid because Moscow MONICA had no event information loss. The final estimate of the coefficient C was the median

Ĉ = Med(Ĉ_40, …, Ĉ_64).

Two estimates of the representativeness coefficient were calculated: one for men and one for women. As mentioned above, ESSE-RF and MONICA were assumed to have the same representativeness coefficients; therefore, this procedure also yielded the estimates of the ESSE-RF representativeness coefficients.

Information loss

The relationship between documented mortality in a study (with mortality information loss present) and mortality in the entire population was assumed to be

P(m < T_pop ≤ m + 7 | T_pop > m) = C · B · P(m < T_epi ≤ m + 7 | T_epi > m),

where T_pop denotes a random variable corresponding to the lifetime of a person in the entire population, T_epi denotes a random variable corresponding to the lifetime of a person in a population with the same mortality rates as documented in the epidemiological study, C is the representativeness coefficient, and B is the mortality information loss (neither C nor B depends on m). Equivalently,

B = P(m < T_pop ≤ m + 7 | T_pop > m) / (C · P(m < T_epi ≤ m + 7 | T_epi > m)).

The coefficient B for ESSE-RF was to be estimated. The probability in the numerator was estimated using RusFMD as described in the previous section. In the denominator, the estimate of the representativeness coefficient C of ESSE-RF was obtained in the previous section, and P(m < T_epi ≤ m + 7 | T_epi > m) was estimated from the ESSE-RF data using the Kaplan-Meier method. This procedure was carried out for every age from 40 to 64, and the median over these estimates was taken as the final estimate of the coefficient B for ESSE-RF.
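The C and B estimators described above reduce to ratios of 7-year death probabilities, medianed over ages. A compact numerical sketch follows (illustrative Python with toy inputs; the study used R with real RusFMD, MONICA and ESSE-RF data, and the cohort probabilities below stand in for Kaplan-Meier estimates):

```python
import statistics

def seven_year_mortality(q, k):
    """P(k < T <= k+7 | T > k) from annual conditional death probabilities
    q[age] = P(T <= age+1 | T > age), via the product-limit-style formula."""
    survive = 1.0
    for age in range(k, k + 7):
        survive *= 1.0 - q[age]
    return 1.0 - survive

def estimate_C(q_pop, p7_cohort, ages=range(40, 65)):
    """Representativeness: median over ages of P_pop / P_cohort."""
    return statistics.median(
        seven_year_mortality(q_pop, k) / p7_cohort[k] for k in ages
    )

def estimate_B(q_pop, c_hat, p7_documented, ages=range(40, 65)):
    """Information loss: median over ages of P_pop / (C_hat * P_documented)."""
    return statistics.median(
        seven_year_mortality(q_pop, k) / (c_hat * p7_documented[k])
        for k in ages
    )

# toy inputs (illustrative only): population annual death probability 1%
# at every age; MONICA-style cohort 7-year mortality 5%; documented
# ESSE-RF-style 7-year mortality 3.5%
q_pop = {age: 0.01 for age in range(40, 72)}
p7_monica = {k: 0.05 for k in range(40, 65)}
p7_esse = {k: 0.035 for k in range(40, 65)}

c_hat = estimate_C(q_pop, p7_monica)
b_hat = estimate_B(q_pop, c_hat, p7_esse)
print(round(c_hat, 3), round(b_hat, 3))  # prints: 1.359 1.429
```

With flat toy rates the medians are trivial; on real data each C_k and B_k varies with age, which is why the paper takes the median over k = 40, …, 64.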
B̂_k = P̂(k < T_pop ≤ k + 7 | T_pop > k) / (Ĉ · P̂(k < T_epi ≤ k + 7 | T_epi > k)), k = 40, …, 64,

B̂ = Med(B̂_40, …, B̂_64).

Under the assumption that mortality information is lost with the same probability as information about CV events, B̂ also estimates the CV event information loss coefficient for ESSE-RF. Two estimates of the information loss were calculated: one for men and one for women.

Model

The observed event rates (see the definition in the section "Methodology of comparison between ESSE-RF cardiovascular event rates and SCORE2 risk estimates") were obtained using two Fine-Gray models of competing risks. One model was fitted to the data of ESSE-RF men, the other to the data of ESSE-RF women. In both cases the participants considered were those who met the criteria of the SCORE2 target population: 40–69 years of age, without CVD or diabetes. The event of interest was composed of cardiovascular death, nonfatal myocardial infarction (MI) and/or nonfatal acute cerebral circulation disorder. Death from other causes was considered a competing event. The list of predictors was identical to that of the original SCORE2 prediction model: age, systolic blood pressure, HDL cholesterol, total cholesterol, smoking status, and the interactions of these risk factors with age. The whole procedure was done in accordance with the original SCORE2 article.

Adjustment for CV event information loss

In Fine-Gray regression models of competing risks, the relationship between the cause-specific cumulative incidence function for a particular set of risk factors, F_i(t), and the baseline cumulative incidence function, F_0(t), is as follows:

F_i(t) = 1 − (1 − F_0(t))^exp(β·x_i),

where x_i is a vector of risk factors and β is a vector of corresponding coefficients. When the risk factors are normalized, the baseline cumulative incidence function reflects the probability of experiencing the event of interest for an "average" participant of the study.
By an average participant is meant a subject whose risk factor levels are averaged among all the study participants. We then note that according to the section “Information loss” the information loss coefficient is calculated for the whole study (separately for men and women). This implies that the coefficient is intended for adjusting a risk of an average participant of the study. Hence, the adjusted estimate of the risk for an individual with a set of normalized risk factors x i should be obtained as follows: F ^ l ( t ) = 1 − 1 − B ^ ⋅ F 0 ^ ( t ) e x p β ^ x i , where F 0 ^ ( t ) is an estimate of the baseline cause specific incidence function and β ^ is an estimate of β derived from the original data. The effectiveness of the described adjustment procedure is demonstrated by simulation approach in . SCORE2 provides an estimate of a CV event occurrence risk. Therefore, to validate SCORE2 risk estimates using the CV event incidence among ESSE-RF participants, event information losses should be accounted for. It was assumed that information about primary CV events in ESSE-RF was lost with the same probability as information about deaths. This assumption would allow to take the estimate of the mortality information loss for the estimate of the primary CV event information loss. It may seem that mortality information loss could be estimated via comparison of mortality rates in ESSE-RF and Russian Fertility and Mortality database. However, mortality rates in ESSE-RF would a priori differ from the mortality rates of the entire Russian population due to inclusion criteria of the study . For example, subjects from marginal strata of the population, people with terminal stages of diseases or people with a high level of distrust in scientific research were not included in ESSE-RF. Therefore, a direct comparison of mortalities in ESSE-RF and RusFMD would not be meaningful. 
To estimate the mortality information loss in ESSE-RF using RusFMD, it was necessary to estimate the discrepancy between mortality rates in the entire Russian population and mortality rates in the Russian population meeting ESSE-RF inclusion criteria. This discrepancy would be called as the representativeness coefficient of ESSE-RF relative to the entire Russian population and denoted by the coefficient C . The exact interpretation of C is as follows: let C be equal to 2, then in the entire Russian population, the mortality rate is twice as high as in the Russian population meeting ESSE-RF inclusion criteria. The representativeness coefficient of ESSE-RF study was estimated using Moscow MONICA, since it was assumed that ESSE-RF and Moscow MONICA had the same representativeness coefficient relative to the corresponding populations. That was because they had the same inclusion criteria and data collection methodology. However, unlike ESSE-RF, all fatal events in Moscow MONICA within the first 7 years of the study were reliably identified because in 1988–1999 the legislation allowed checking the vital status of a person through the registry office. Therefore, all the difference in seven-year mortality in Moscow MONICA and RusFMD data for Moscow, if present, would be explained by the representativeness coefficient of Moscow MONICA relative to the entire population of Moscow. That allowed to estimate representativeness coefficient C by comparing mortality in Moscow MONICA and RusFMD. Since the representativeness coefficient of ESSE-RF was estimated, the mortality information loss could be estimated too. To do so, mortality rates of ESSE-RF were compared with RusFMD mortality rates, adjusted for the representativeness coefficient. 
Further, the information loss would be denoted by the coefficient B and interpreted as follows: let B be equal to 1.5, then in the Russian population meeting ESSE-RF inclusion criteria, the mortality was 1.5 times higher than observed mortality in ESSE-RF. Let us note again that mortality information was assumed to be lost with the same probability as information about primary CV events, so the estimate of B would be the estimate of primary CV event information loss too. The exact definitions and methods of estimating C and B are given in the “Statistical analysis” section. To assess the accuracy of SCORE2 risk estimates for ESSE-RF participants, these estimates were compared with the observed CV event rates of ESSE-RF, adjusted for CV event information loss. The observed CV event rates of ESSE-RF, adjusted for CV event information loss were calculated as follows. Firstly, we took ESSE-RF data on risk factors and CV events and fitted a model to obtain estimates on 7-year risks of CV events for every ESSE-RF participant. These risk estimates were called the observed CV event rates in ESSE-RF. Secondly, the observed CV event rates were adjusted with the coefficient B to account for CV event information loss during the 7-year period. These adjusted rates were considered as the estimates of a CV event risk within 7-years for the ESSE-RF participant from the moment of entering the study. Finally, the 10-year risks of CV events were estimated with the use of risk multiplicativity assumption. 10-year risks of CV events were not estimated directly from ESSE-RF in order to avoid COVID19 pandemic influence on CV event rates. The resulting 10-year risk estimate for every participant would be referred as SCORE2-ESSE. The model is described in more details in statistical analysis section. The assessment of SCORE2 risk estimates for ESSE-RF participants was made by comparing SCORE2 and SCORE2-ESSE for every participant. Statistical analysis was performed in R version 4.2.1. 
Continuous variables were described with median and quartiles: MED [Q25; Q75]. Representativeness coefficient The probabilities of death over a 7-year period for people aged m years in an entire population and in a specified part of the population were assumed to relate according to the formula P m < T p o p ≤ m + 7 ∣ T p o p > m = C ⋅ P m < T s p e c ≤ m + 7 ∣ T s p e c > m , where T pop denotes a random variable corresponding to the lifetime of a person in the entire population, T spec denotes a random variable corresponding to the lifetime of a person in a specified part of the population and C is a number (does not depend on m ). The specified part of the population consisted of those people from the entire population who met the inclusion criteria of the epidemiological study. The coefficient C was called the representativeness coefficient of the epidemiological study. The representativeness coefficient C of Moscow MONICA was estimated as follows. The probability on the left side of the formula above was estimated with RusFMD data for the corresponding period; the probability on the right side of the equality was estimated with the MONICA data; C was estimated as the ratio of them. The procedure was carried out for subpopulations aged 40 to 64 years at the beginning of the study (due to data availability and SCORE2 target population criteria) and 25 estimates of the coefficient C were obtained: C k ^ = P ^ k < T p o p ≤ k + 7 ∣ T p o p > k P ^ k < T s p e c ≤ k + 7 ∣ T s p e c > k , k = 40 , … 64. The numerator was estimated with a formula analogous to the Kaplan-Meier product limit estimate: P ^ k < T p o p ≤ k + 7 ∣ T p o p > k = 1 − 1 − q k + 6 … 1 − q k , where q k = P ^ ( T ≤ k + 1 ∣ T > k ) were obtained from RusFMD. The denominator was estimated from the MONICA data using classical Kaplan-Meier method. Note that such method for estimating probability in the denominator was valid due to the fact that Moscow MONICA had no event information loss. 
The final estimate of the coefficient C was the median $\hat{C} = \mathrm{Med}(\hat{C}_{40}, \dots, \hat{C}_{64})$. Two estimates of the representativeness coefficient were calculated: for men and for women. As mentioned above, ESSE-RF and MONICA were assumed to have the same representativeness coefficients; therefore, after this procedure the estimates of the ESSE-RF representativeness coefficients were obtained too.

Information loss

The relationship between documented mortality in a study (with mortality information loss present) and mortality in the entire population was assumed to be

$P(m < T_{pop} \le m + 7 \mid T_{pop} > m) = C \cdot B \cdot P(m < T_{epi} \le m + 7 \mid T_{epi} > m),$

where $T_{pop}$ denotes a random variable corresponding to the lifetime of a person in the entire population, $T_{epi}$ denotes a random variable corresponding to the lifetime of a person in a population with the same mortality rates as are documented in the epidemiological study, C is the representativeness coefficient, and B is the mortality information loss (both C and B do not depend on m). Equivalently,

$B = \frac{P(m < T_{pop} \le m + 7 \mid T_{pop} > m)}{C \cdot P(m < T_{epi} \le m + 7 \mid T_{epi} > m)}.$

The coefficient B for ESSE-RF was to be estimated. The probability in the numerator was estimated using RusFMD as described in the previous section. In the denominator, the estimate of the representativeness coefficient C of ESSE-RF was obtained in the previous section, and $P(m < T_{epi} \le m + 7 \mid T_{epi} > m)$ was estimated from the ESSE-RF data using the Kaplan-Meier method. This procedure was carried out for every age from 40 to 64, and the median over these estimates was taken as the final estimate of the coefficient B for ESSE-RF:

$\hat{B}_k = \frac{\hat{P}(k < T_{pop} \le k + 7 \mid T_{pop} > k)}{\hat{C} \cdot \hat{P}(k < T_{epi} \le k + 7 \mid T_{epi} > k)}, \quad k = 40, \dots, 64, \qquad \hat{B} = \mathrm{Med}(\hat{B}_{40}, \dots, \hat{B}_{64}).$

Under the assumption that mortality information is lost with the same probability as information about CV events, $\hat{B}$ also estimates the CV event information loss coefficient for ESSE-RF.
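The same pattern yields the information-loss coefficient: each $\hat{B}_k$ divides the population 7-year death probability by $\hat{C}$ times the probability documented in the study, and the median over ages is taken. The numbers below are again invented for illustration.

```python
# Illustrative computation of the information-loss coefficient: each B_k
# divides the population 7-year death probability by C times the probability
# documented in the study, and the median over ages 40-64 is the estimate.
# All numbers are invented.
from statistics import median

C_hat = 1.6  # representativeness coefficient assumed estimated beforehand

# hypothetical 7-year death probabilities
p_pop = {k: 0.030 + 0.002 * (k - 40) for k in range(40, 65)}   # whole population
p_epi = {k: p_pop[k] / (C_hat * 1.5) for k in range(40, 65)}   # documented in the study

B_hat = median(p_pop[k] / (C_hat * p_epi[k]) for k in range(40, 65))
# by construction B_hat equals 1.5: roughly a third of deaths go undocumented
```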
Two estimates of information loss were calculated: for men and for women.

Model

The observed event rates (see the definition in the section “Methodology of comparison between ESSE-RF cardiovascular event rates and SCORE2 risk estimates”) were obtained with the use of two Fine-Gray models of competing risks: one fitted to the data of ESSE-RF men, the other to the data of ESSE-RF women. In both cases the considered participants were those who met the criteria of the SCORE2 target population: 40–69 years of age, without CVD and diabetes. The event of interest was composed of cardiovascular death, nonfatal myocardial infarction (MI) and/or nonfatal acute cerebral circulation disorder. Death from other causes was considered a competing event. The list of predictors was identical to the original SCORE2 prediction model: age, systolic blood pressure, HDL cholesterol, total cholesterol, smoking status, and interactions of these risk factors with age. The whole procedure was carried out in accordance with the original SCORE2 article.

Adjustment for CV event information loss

In Fine-Gray regression models of competing risks, the relationship between the cause-specific cumulative incidence function for a particular set of risk factors, $F_i(t)$, and the baseline cumulative incidence function $F_0(t)$ is

$F_i(t) = 1 - (1 - F_0(t))^{\exp(\beta x_i)},$

where $x_i$ is a vector of risk factors and $\beta$ is a vector of corresponding coefficients. If the risk factors are normalized, the baseline cumulative incidence function reflects the probability of experiencing the event of interest for an “average” participant of the study, i.e., a subject whose risk factor levels equal the averages over all study participants. Note that, according to the section “Information loss”, the information loss coefficient is calculated for the whole study (separately for men and women).
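The Fine-Gray relationship above can be sketched numerically. The helper below also accepts a loss coefficient B applied to the baseline incidence, anticipating the adjustment described next; the coefficients and risk-factor values are invented, not the fitted ESSE-RF model.

```python
# Numerical sketch of the Fine-Gray relationship between an individual's
# cumulative incidence and the baseline one. The optional coefficient B,
# applied to the baseline incidence, anticipates the information-loss
# adjustment; all coefficient and risk-factor values below are invented.
import math

def individual_risk(F0, beta, x, B=1.0):
    """F_i(t) = 1 - (1 - B * F0(t)) ** exp(beta . x); B = 1 means no
    information-loss adjustment."""
    lp = sum(b * xi for b, xi in zip(beta, x))
    return 1.0 - (1.0 - B * F0) ** math.exp(lp)

beta = [0.3, 0.5]    # hypothetical coefficients for two normalized risk factors
x = [1.2, -0.4]      # one participant's normalized risk-factor values
raw = individual_risk(0.05, beta, x)               # from baseline incidence of 5%
adjusted = individual_risk(0.05, beta, x, B=1.5)   # inflated for lost events
# adjusted > raw, since the adjustment scales up the baseline incidence
```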
This implies that the coefficient is intended for adjusting the risk of an average participant of the study. Hence, the adjusted estimate of the risk for an individual with a set of normalized risk factors $x_i$ should be obtained as

$\hat{F}_i(t) = 1 - (1 - \hat{B} \cdot \hat{F}_0(t))^{\exp(\hat{\beta} x_i)},$

where $\hat{F}_0(t)$ is an estimate of the baseline cause-specific incidence function and $\hat{\beta}$ is an estimate of $\beta$ derived from the original data. The effectiveness of the described adjustment procedure is demonstrated by a simulation approach in .

In this part, the results of the assessment of SCORE2 risk estimates are provided, as well as the estimates of the representativeness coefficient and information loss for ESSE-RF. This is followed by the discussion.

Representativeness coefficient and information loss for ESSE-RF

As mentioned before, Moscow MONICA and ESSE-RF were assumed to have the same representativeness coefficients. The estimates calculated from Moscow MONICA and RusFMD data were equal to 1.63 for men and 1.74 for women. Estimates of the ESSE-RF CV event loss coefficients were obtained in accordance with the “Information loss” section and were equal to 1.53 for men and 1.44 for women. These coefficients mean that information about approximately every third primary CV event in ESSE-RF was lost.

Accuracy assessment of SCORE2 risk estimates for ESSE-RF participants

To begin the assessment, the distributions of SCORE2 and SCORE2-ESSE were compared.
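The per-participant comparison can be sketched as the median and quartiles of paired differences between the two scales, in the MED [Q25; Q75] convention declared in the statistical analysis section. The risk vectors below are invented examples, not study data.

```python
# Sketch of the per-participant comparison: median and quartiles of paired
# differences between two risk scales, reported in the MED [Q25; Q75]
# convention. The risk vectors are invented examples, not study data.
from statistics import quantiles

score2      = [2.1, 3.4, 5.0, 7.8, 9.2, 11.5, 4.4, 6.1]   # %, hypothetical
score2_esse = [1.9, 3.5, 4.1, 7.0, 8.0, 10.9, 4.0, 5.2]   # %, hypothetical

diff = [a - b for a, b in zip(score2, score2_esse)]
q25, med, q75 = quantiles(diff, n=4)   # quartile cut points of the differences
summary = f"{med:.1f} [{q25:.1f}; {q75:.1f}]"
```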
 presents the distributions of SCORE2 and SCORE2-ESSE for those ESSE-RF participants who met the SCORE2 target population criteria. presents the distribution of the difference between SCORE2 and SCORE2-ESSE separately for men and for women. The difference between SCORE2 and SCORE2-ESSE was 0.7 [0.0; 1.3] for men and 2.6 [1.3; 5.6] for women. The absolute difference between SCORE2 and SCORE2-ESSE was 0.9 [0.4; 1.5] for men and 2.7 [1.3; 5.6] for women. For men, the descriptive statistics suggested a high level of consistency between the two scales, displayed by the coincidence of the SCORE2 and SCORE2-ESSE density functions and the concentration of the distribution of their difference around zero. For women, the consistency of the scales was markedly lower; in particular, overestimation of SCORE2 relative to SCORE2-ESSE was detected via the distribution of the difference between SCORE2 and SCORE2-ESSE. The detected heterogeneity implied that the SCORE2 risk estimates should be assessed further separately for men and for women.

 shows the correspondence of SCORE2 and SCORE2-ESSE for ESSE-RF participants. For men, the plot supported the consistency between SCORE2 and SCORE2-ESSE. The coefficients of mean calibration between SCORE2 and SCORE2-ESSE risks for men were 1.004 for the slope (p = 0.20 for comparison with 1) and 0.738 for the intercept (p < 0.001 for comparison with 0). This means that the estimates for men were consistent and differed by a small constant value over the entire risk range. For women, overestimation of SCORE2 relative to SCORE2-ESSE was detected once again: the coefficients of mean calibration were 1.507 for the slope (p < 0.001 for comparison with 1) and 2.193 for the intercept (p < 0.001 for comparison with 0). Another aspect identified by the plot was the clustering of points by smoking status for women.
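The mean-calibration slope and intercept can be sketched with a simple least-squares fit between two per-participant risk vectors; the paper's exact calibration procedure may differ, and the data below are invented. A slope near 1 and an intercept near 0 indicate agreement.

```python
# Sketch of a mean-calibration check between two per-participant risk vectors:
# regress one on the other and inspect slope and intercept. Plain least
# squares is used here; the paper's exact calibration procedure may differ,
# and the data are invented.

def ols(x, y):
    """Return (slope, intercept) of the least-squares line y ~ slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

observed = [2.0, 3.0, 5.0, 8.0, 10.0]       # e.g. SCORE2-ESSE risks, %
predicted = [r + 0.7 for r in observed]     # SCORE2 overestimating by a constant
slope, intercept = ols(observed, predicted)
# slope == 1.0 and intercept == 0.7: consistent scales shifted by a constant
```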
This implied a difference between the coefficients for smoking status in the SCORE2 and SCORE2-ESSE models for women and confirmed the discrepancy between SCORE2 and SCORE2-ESSE for women. The consistency between SCORE2 and SCORE2-ESSE for men was also confirmed by the Bland-Altman plot, since the graph had the form of a narrow isosceles triangle symmetrical with respect to zero . For women, in turn, the asymmetrical Bland-Altman triangle confirmed the overestimation of SCORE2 relative to SCORE2-ESSE .

Based on the information given in this section, we concluded that SCORE2 and SCORE2-ESSE were consistent for men participating in ESSE-RF, i.e., that SCORE2 risk estimates for men participating in ESSE-RF were accurate. From that we derived that SCORE2 is an accurate predictive instrument for the population of Russian men. In contrast, we showed an inconsistency between SCORE2 and SCORE2-ESSE for women participating in ESSE-RF and therefore concluded that SCORE2 cannot be used as a predictive instrument for the population of Russian women. This problem will be investigated outside the scope of this work.

Suggestions for adaptations of SCORE2 for clinical practice

This section presents an adapted interpretation of SCORE2 risk estimates for clinical practice in Russia and other countries of the very high risk region. Since SCORE2 was shown to give accurate risk estimates for Russian men but not for Russian women, a meaningful interpretation of SCORE2 risk estimates for the Russian population is only possible for men; therefore, in this section only the population of Russian men is considered. Along with CV event risk estimates, SCORE2 has an interpretation in terms of 10-year CVD risk groups: low-to-moderate risk, high risk, or very high risk. In the subgroup of men aged 40 to 49 years, the cutoff points for the risk groups were 2.5% and 7.5%.
That is, men with SCORE2 below 2.5% were considered to be at low-to-moderate risk, those with SCORE2 between 2.5% and 7.5% at high risk, and those with SCORE2 above 7.5% at very high risk of a CV event within 10 years. In the subgroup aged 50 to 69 years, the corresponding cutoff values were 5% and 10%. From now on, these cutoff values are referred to as the original cutoff values, or original cutoffs.

The original cutoff values were considered questionable for two main reasons. The first reason was that, according to the original cutoff values, 63% of men in ESSE-RF were assigned to the very high 10-year CVD risk group . From that it can be approximated that 2 of 3 Russian men would be considered at “very high” 10-year CVD risk and, according to the ESC guideline, would generally be recommended CVD risk factor treatment. Such a strategy would paralyze the public healthcare system. For comparison, according to SCORE, 33% of men in ESSE-RF were considered at very high risk . The second reason was that the distribution of men into risk groups defined by the original cutoff values was explained more by the high baseline risks of CV events in Russia than by the levels of the risk factors of the participants. This can be illustrated with the following example.

Example. A French man (representative of the low risk region), a German man (representative of the moderate risk region), and a Russian man (representative of the very high risk region), each aged 51 years and with the same SCORE2 risk factor values of current smoking, SBP 125 mmHg, total cholesterol 4.8 mmol/L and HDL cholesterol 1.2 mmol/L, visit a cardiologist. According to SCORE2, they would be assigned 10-year CV event risks of 4.9%, 6.2%, and 12%, respectively. Based on the original cutoff values, the French man would be at low-to-moderate risk, the German man at high risk, and the Russian man at very high risk of a CV event within 10 years.
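The original age-dependent cutoffs can be written as a small classifier; handling of the exact boundary values (taking the upper group at or above the cutoff) is an assumption here, since the text is ambiguous at those points.

```python
# The original age-dependent SCORE2 cutoffs as a small classifier. Handling of
# the exact boundary values (>= for the upper group) is an assumption, since
# the text is ambiguous at those points.

def risk_group(score2, age):
    """Assign a 10-year CVD risk group using the original SCORE2 cutoffs:
    2.5%/7.5% for age < 50, 5%/10% for age 50-69 (score2 in percent)."""
    low_cut, very_high_cut = (2.5, 7.5) if age < 50 else (5.0, 10.0)
    if score2 < low_cut:
        return "low-to-moderate"
    if score2 < very_high_cut:
        return "high"
    return "very high"

# the three 51-year-old men from the example: 4.9%, 6.2% and 12%
groups = [risk_group(r, 51) for r in (4.9, 6.2, 12.0)]
# -> ["low-to-moderate", "high", "very high"]
```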
This implies that, according to the ESC guidelines, these three men would receive different recommendations on further prevention and treatment of the SCORE2 risk factors, which is a questionable result given that their SCORE2 risk factors are identical. In our opinion, this situation is incorrect from the point of view of preventive medicine and indicates the need to determine different cutoff values for each risk region.

To approach the problem illustrated in the example, we proposed to define cutoff values for each risk region separately, as follows. For a hypothetical individual, one can observe how the SCORE2 risk estimate changes when the risk factor values are held fixed and only the risk region changes. In the example considered, a risk of 4.9% in the low risk region corresponds to a risk of 6.2% in the moderate risk region and 12% in the very high risk region. Cutoff values for the very high CVD risk region were calculated using this correspondence between the very high and the moderate CVD risk regions. The moderate risk region was taken as the reference mainly because the resulting distribution of ESSE-RF men into CVD risk groups resembled the distribution given by SCORE. According to this procedure, the original cutoff value of 2.5% for the moderate risk region converts into 4.5% for the very high risk region; similarly, 5% converts into 9%, 7.5% into 14%, and 10% into 18%. The converted version of the original cutoff values is proposed for use in the countries of the very high CVD risk region . could be referred to as an adapted interpretation of SCORE2 risk estimates for countries of the very high risk region. The distribution of ESSE-RF men into risk groups defined by the proposed cutoff values was as follows . For reference, the distribution of men by risk groups according to SCORE is presented again . The chart for calculating SCORE2 from the original article is presented .
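The proposed conversion and its consequence for the example can be sketched as follows; the cutoff mapping is taken directly from the text, while the boundary handling at exact cutoff values is again an assumption.

```python
# The proposed cutoff conversion for the very high CVD risk region, taken from
# the text (2.5 -> 4.5, 5 -> 9, 7.5 -> 14, 10 -> 18), and its effect on the
# example. Boundary handling at exact cutoffs is an assumption.

CUTOFF_CONVERSION = {2.5: 4.5, 5.0: 9.0, 7.5: 14.0, 10.0: 18.0}

def risk_group_adapted(score2, age):
    """Assign a 10-year CVD risk group using the adapted cutoffs for the very
    high risk region: 4.5%/14% for age < 50, 9%/18% for age 50-69."""
    low_cut, very_high_cut = (4.5, 14.0) if age < 50 else (9.0, 18.0)
    if score2 < low_cut:
        return "low-to-moderate"
    if score2 < very_high_cut:
        return "high"
    return "very high"

# The 51-year-old Russian man from the example (SCORE2 = 12%): "very high"
# under the original cutoffs, but "high" under the adapted ones, i.e. the same
# group the German man occupies within his own (moderate risk) region.
group = risk_group_adapted(12.0, 51)
```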
We present a recolored version of the chart to be used in the population of men in Russia and other countries of the very high CVD risk region . The new coloring is defined by the proposed cutoff values. We note separately that the methodology presented in this paper would allow validation of any other risk scale for the Russian population, such as, for example, the American College of Cardiology/American Heart Association scale . In turn, different validated risk scales would enable a comparison of treatment strategies based on SCORE2 with treatment strategies based on other commonly utilized risk assessment tools. Such comparisons would provide a more complete understanding of SCORE2, as well as its strengths and limitations, thereby improving its practical usefulness.

Discussion

SCORE2 was developed for more effective identification of individuals at increased risk of CVD. The scale accounts for the impact of competing risks from non-CVD deaths and provides risk estimates for the combined outcome of fatal and non-fatal CVD events . However, SCORE2 has one potential limitation for countries of the very high CVD risk region: the SCORE2 model was trained and validated on cohorts mostly from European regions and populations at low or moderate risk of CVD, whereas representative data from high and very high risk regions were not used . Kasim S.S. et al. considered the use of SCORE2 for the Asian region and indicated that the SCORE2 risk model would need to be recalibrated before it could be employed in a different population . We decided to validate SCORE2 for the Russian population using data from the ESSE-RF epidemiological study. The results demonstrated that SCORE2 risk estimates were accurate for Russian men; hence, recalibration of SCORE2 before its use in the population of Russian men is not needed. At the same time, SCORE2 risk estimates for Russian women turned out to be overestimated, and some of the risk factors included in the SCORE2 model were misadjusted. It was supposed that the inaccuracy of SCORE2 risk estimates for Russian women is related to differences between European and Russian women in the patterns of the risk factors and in their impact on morbidity and mortality . Therefore, SCORE2 cannot be considered an accurate predictive instrument for Russian women and will be investigated further outside the scope of this paper.

SCORE2 was included in the 2021 ESC Guidelines for Cardiovascular Disease Prevention in Clinical Practice, which provide preventive recommendations at the population and individual level.
At the individual level, the guideline provides a step-by-step approach to lifestyle modification and appropriate prescription of treatment based on the SCORE2 risk estimate and comorbidities . For example, treatment of atherosclerotic CVD risk factors was recommended in apparently healthy people with SCORE2 ≥ 7.5% for age under 50 and SCORE2 ≥ 10% for age 50–69 . Since the publication of the guidelines, numerous articles have discussed the replacement of SCORE with SCORE2. Researchers were dissatisfied with the fact that, based on SCORE2, a significant proportion of the population is considered at very high risk instead of the previous low-to-moderate or high risk according to SCORE, which would lead to an unmanageable load on the primary healthcare systems of the countries of the SCORE2 very high risk region . In our study, an enormous fraction of ESSE-RF men who met the SCORE2 target population criteria, 63%, were classified into the very high 10-year CVD risk group.

The problem outlined above motivated the objective of modifying the interpretation of SCORE2 risk estimates for the countries of the very high risk region. As a result of mathematical modeling, an adapted interpretation of SCORE2 risk estimates for men was proposed: it was suggested to assign men from the very high risk region to the group of very high 10-year CVD risk at SCORE2 ≥ 14% (age under 50) and ≥ 18% (age 50–69). In accordance with the adapted interpretation, the fraction of ESSE-RF men at “low-to-moderate” 10-year CVD risk increased from 2% to 18%, and the fraction at very high CVD risk dropped from 63% to 20%.

To obtain adjusted CV event rates, a set of assumptions was made. First, it was assumed that both the representativeness and the information loss coefficients were constant across age groups. Second, it was assumed that information about CV events was lost with the same probability as information about deaths.
Third, a risk-multiplicativity assumption was used to derive estimates of 10-year risk from estimates of 7-year risk. We considered these assumptions to be logically valid; however, since they could not be formally verified from the data, they are mentioned as limitations of the study. Another limitation relates to the recolored version of the SCORE2 calculation chart for men in the very high CVD risk region: in the clinical practice of a particular country of the very high CVD risk region, the chart could be used only after SCORE2 has been validated in that country's population. In other words, the SCORE2 risk groups are meaningful only if the accuracy of SCORE2 risk estimates for the particular population has been assessed. The accuracy of SCORE2 for the population of Russian men was confirmed, whereas SCORE2 was shown to be inaccurate for Russian women. An adapted interpretation of SCORE2 for the male populations of Russia and other countries of the very high risk region was proposed. Under this interpretation, the fraction of ESSE-RF men at “low-to-moderate” 10-year CVD risk increased from 2% to 18% and the fraction at “very high” CVD risk decreased from 63% to 20% compared with the original interpretation. The proposed interpretation would allow a more personalized approach to CVD treatment and optimize the burden on primary healthcare in the countries of the very high risk region.

Supporting information:

S1 Appendix (DOCX)

S1 Table. Simulated data for the number of CVD events after loss, the number of competing events after loss and the loss coefficient estimate. (XLSX)

S2 Table. Calibration coefficients for adjusted and unadjusted estimates. (XLSX)

S1 Data (DOCX)
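The original and adapted very-high-risk thresholds described above can be contrasted in a small sketch (an illustration only; the function and labels are our own, with the thresholds taken from the text):

```python
def very_high_risk(age: int, score2_pct: float, adapted: bool = False) -> bool:
    """Classify a man aged 40-69 from the very high CVD risk region as
    'very high 10-year CVD risk' under the original ESC 2021 thresholds
    (>= 7.5% if under 50, >= 10% if 50-69) or under the adapted thresholds
    proposed in the text (>= 14% if under 50, >= 18% if 50-69)."""
    if adapted:
        threshold = 14.0 if age < 50 else 18.0
    else:
        threshold = 7.5 if age < 50 else 10.0
    return score2_pct >= threshold

# A 55-year-old man with a SCORE2 estimate of 12% is 'very high risk'
# under the original interpretation but not under the adapted one.
print(very_high_risk(55, 12.0))                # True
print(very_high_risk(55, 12.0, adapted=True))  # False
```

Reclassifications of this kind are what shift the very-high-risk fraction of ESSE-RF men from 63% to 20%, as reported above.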
Optimal threshold of portal pressure gradient for patients with ascites after covered TIPS: a multicentre cohort study

Ascites is the most prevalent decompensation event in patients with cirrhosis, affecting > 50% of such patients and exhibiting an estimated annual incidence of 5–10% in those with compensated cirrhosis. The occurrence of ascites results in a 5-year mortality rate of approximately 50–70%, and once it becomes refractory (i.e., it cannot be mobilized, or its early recurrence cannot be satisfactorily prevented by medical therapy), median survival is reduced to 6 months. Transjugular intrahepatic portosystemic shunt (TIPS) has emerged as an effective treatment for ascites. Notably, studies underscore the superior efficacy of TIPS over paracentesis in controlling recurrent and refractory ascites, and the latest randomised controlled trial (RCT) concluded that TIPS with covered stents improved transplant-free survival compared with large-volume paracentesis (LVP) in patients with recurrent ascites. Current guidelines advocate reducing the portal pressure gradient (PPG) to below 12 mmHg, or by more than 50%, after the TIPS procedure. However, this standard is used primarily for patients with variceal bleeding; there is a lack of consensus on the optimal post-TIPS PPG for improving survival, and the optimal PPG decrease to control medically recurrent and refractory ascites needs to be clarified. In addition, the above PPG standard was determined using bare stents, whereas TIPS with covered stents has better clinical outcomes than TIPS with bare stents; the standard may therefore not be applicable in the covered stent era. The appropriate PPG for patients with ascites has been explored in only a few studies, which showed a relationship between post-TIPS PPG and the clinical response of ascites.
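The conventional haemodynamic target quoted above (a post-TIPS PPG below 12 mmHg, or a decrease of more than 50%) can be written as a one-line check; a minimal sketch, with function and variable names of our own choosing:

```python
def meets_conventional_target(pre_ppg_mmhg: float, post_ppg_mmhg: float) -> bool:
    """Conventional post-TIPS haemodynamic target (per the guidelines cited
    in the text): absolute PPG below 12 mmHg, or a relative decrease > 50%."""
    relative_decrease = (pre_ppg_mmhg - post_ppg_mmhg) / pre_ppg_mmhg
    return post_ppg_mmhg < 12.0 or relative_decrease > 0.5

# With the median values later reported in this cohort (23 mmHg pre-TIPS,
# 9 mmHg post-TIPS), the target is met on both criteria.
print(meets_conventional_target(23.0, 9.0))  # True
```

The study's question is precisely whether this bleeding-derived target is the right one for ascites under covered stents.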
Therefore, the determination of the optimal post-TIPS PPG cut-off in patients undergoing covered stent treatment for ascites necessitates further confirmation. We designed this national multicentre cohort study to identify potential thresholds for post-TIPS PPG that could benefit patients in terms of survival and ascites control.

Patients

This retrospective study included patients who underwent TIPS for recurrent or refractory ascites at four centers between January 2015 and December 2022. All patients received etiological treatment with continued antiviral therapy, alcohol abstinence, or ursodeoxycholic acid depending on the cause of cirrhosis, and received regular diuretics (spironolactone and furosemide) for ascites, with additional therapeutic paracentesis as needed. Each center had a gastroenterology and hepatology unit with experienced clinicians performing TIPS procedures. The TIPS procedure was carried out under local anesthesia using a standard method as previously described. Covered stents of three diameters (6, 8, and 10 mm) were used; stent diameter was selected mainly on the basis of the operator's experience, the patient's pre-TIPS PPG and liver function, and the availability of stents. PPG was obtained preoperatively (pre-TIPS PPG) and immediately after stent placement (post-TIPS PPG), and was measured as the difference between the pressures in the portal vein and the inferior vena cava. Inclusion criteria were as follows: (1) a diagnosis of cirrhosis (based on clinical signs, laboratory and imaging tests, or liver biopsy) and (2) a TIPS procedure for ascites.
The exclusion criteria were as follows: (1) age < 14 or > 75 years; (2) not recurrent or refractory ascites; (3) Child–Pugh score > 13; (4) TIPS with uncovered stents, unrelieved bile duct obstruction, hepatocellular carcinoma, or another advanced tumor; (5) no treatment with adequate doses of diuretics; (6) previous TIPS implantation; (7) portocaval surgery or splenectomy; and (8) recurrence of HE without an identifiable trigger (Supplementary Fig. 1). Recurrent ascites was defined as ≥ 3 LVP within 1 year, and refractory ascites was defined, per the Asia–Pacific Association for the Study of the Liver, as ‘ascites that cannot be mobilized or whose early recurrence cannot be satisfactorily prevented by medical therapy’. All patients received diuretic treatment (at least 100 mg of spironolactone and 40 mg of furosemide daily), tailored to patient tolerance. The study was conducted in accordance with the Declaration of Helsinki and approved by the Biomedical Research Ethics Committee of Shandong Provincial Hospital.

Follow-up

The primary endpoint of this study was liver-related death. Secondary endpoints included the recurrence of ascites, hemorrhage, development of overt hepatic encephalopathy (OHE), and shunt dysfunction. Liver-related deaths in our study included those due to liver failure, variceal bleeding, and other complications of cirrhosis. Recurrence of ascites was defined as the exacerbation of ascites that could not be improved by diuretics after TIPS, the reappearance of ascites after TIPS, or the requirement of LVP after TIPS. Reduction of ascites was defined as the presence of ascites that was less than that before TIPS, with or without diuretics, and did not require paracentesis. OHE was defined as hepatic encephalopathy of grade ≥ 2 according to the West Haven-modified criteria.
Shunt dysfunction was suspected at Doppler ultrasonography or CT if the stent was blocked, if flow velocity was less than 60 cm/s, or if absence of blood flow or reversal of intrahepatic portal flow occurred. The follow-up period was defined as the duration from admission to death, liver transplantation, the final visit, or study termination. Comprehensive physical examinations, biochemical and hematologic tests, and abdominal ultrasound were conducted.

Statistical analyses

Quantitative variables were presented as medians (quartiles) and compared using the Mann–Whitney U test for non-normal distributions. Qualitative variables were reported as frequencies (percentages) and compared using the Chi-square test. Density plots and receiver-operating characteristic (ROC) curves were used to investigate potential post-TIPS PPG thresholds. Characteristics with a p value < 0.05 in univariable Cox regression analysis were included in the multivariable analysis, and hazard ratios (HRs) with 95% confidence intervals (95% CIs) were reported. Interaction analyses were performed; an interaction p value > 0.05 indicated no interaction. The cumulative incidences of outcomes were compared using the Kaplan–Meier method and the log-rank test. Competing risk analyses (Fine–Gray tests) were also performed: death or liver transplantation was considered a competing risk for the other outcomes, while liver transplantation and non-liver death were considered competing risks for liver-related death. Propensity score matching (PSM) was performed on the subgroups defined by the post-TIPS PPG thresholds because of large baseline differences; the nearest-neighbor matching method with a 0.2 caliper was used to construct the control group, and a standardized mean difference (SMD) < 0.2 indicated a small difference between groups. All analyses were performed using R v.4.1.0 ( http://www.R-project.org/ ) with the survival , cmprsk , and MatchIt packages.
All results with a two-sided p value of < 0.05 were considered statistically significant.

Patient characteristics and clinical outcomes

The study included 276 patients with recurrent or refractory ascites treated with TIPS. The median age of the patients was 57 years, with 173 (62.7%) being male and 103 (37.3%) being female. One hundred forty patients had hepatitis B virus and received continued antiviral therapy. Hepatitis B virus DNA was detected in 49 (35.0%) patients. A total of 246 patients had varices, among whom 155 patients had a history of variceal bleeding.
All of these patients received non-selective beta-blockers or endoscopic therapy according to the guideline for portal hypertension. Partial portal vein thrombosis was diagnosed in 107 patients before TIPS placement, and all of them received anticoagulation therapy according to the treatment guideline. All patients were administered furosemide (20–120 mg/d) and spironolactone (40–160 mg/d) before TIPS. The median duration of diuretic use was 24.0 (8.0, 48.0) months. Additionally, all patients underwent at least one LVP procedure, and 162 patients underwent more than three LVP procedures within 12 months. During the TIPS procedure, a gastrorenal shunt was found in eight (2.9%) patients and a splenorenal shunt was found in five (1.8%) patients; all of these patients underwent shunt embolisation. The follow-up period was 21.6 (7.5, 41.6) months. The details of the patients’ characteristics are shown in Table . During the follow-up period, a total of 151 (54.7%) patients died, with 122 (44.2%) experiencing liver-related death. Of the total number of patients, 73 (26.4%) experienced a recurrence of ascites that was uncontrollable with diuretics, while only 11 (4.0%) underwent LVP. The median time to the recurrence of ascites was 2.9 (1, 14.7) months. Among the 68 patients with reduced ascites, five resumed the use of diuretics. Only 28 (10.2%) patients experienced hemorrhage. Among the 136 patients who experienced OHE after TIPS placement, 25 (18.4%) required hospitalization more than once, and three of these patients underwent stent reduction. Meanwhile, 27 (10.2%) patients experienced shunt dysfunction. The outcomes are summarized in Table .

Immediate PPG after TIPS procedure

The PPG decreased from 23 (19, 27) mmHg to 9 (7, 10) mmHg after the TIPS procedure across all patients, a reduction of 62% (53%, 72%).
Based on the density plots and ROC curves, the potential thresholds for lower mortality were a post-TIPS PPG of > 7, 8, or 9 mmHg, or a decrease rate of < 75%, and the potential thresholds for ascites control were a post-TIPS PPG of < 10 or 11 mmHg, or a 65% decrease in PPG (Fig. A–B). Supplementary Figs. 2–4 display potential thresholds for other clinical outcomes. These potential thresholds were used as categorical variables in the univariable and multivariable regression analyses.

The effect of PPG reduction on clinical outcomes

The univariable and multivariable analyses of liver-related death identified a post-TIPS PPG < 7 mmHg as a risk factor for mortality (p = 0.003, HR = 1.803, 95% CI 1.220–2.665) (Supplementary Table 1). Considering the secondary endpoints as influencing factors one by one, we found that the recurrence of ascites (p = 0.033, HR = 1.520, 95% CI 1.003–2.235), hemorrhage (p = 0.019, HR = 1.911, 95% CI 1.115–3.278), and OHE (p = 0.009, HR = 1.666, 95% CI 1.136–2.443) were also risk factors for mortality (Supplementary Table 2). Meanwhile, a post-TIPS PPG < 11 mmHg was a protective factor for ascites control (p = 0.012, HR = 0.524, 95% CI 0.316–0.868) (Supplementary Table 3). No specific PPG threshold was identified as an independent factor for the other outcomes (Supplementary Tables 4–6). The interaction analysis revealed no significant interaction between PPG < 11 mmHg and recurrent/refractory ascites (p = 0.449) or between PPG < 7 mmHg and recurrent/refractory ascites (p = 0.488, Fig. C).

7 mmHg as a potential PPG threshold for survival

Dichotomized at a post-TIPS PPG of 7 mmHg, 202 patients had a post-TIPS PPG ≥ 7 mmHg and 74 had a PPG < 7 mmHg (Table ).
Patients with a post-TIPS PPG ≥ 7 mmHg had significantly lower mortality than those with a PPG < 7 mmHg (51.0% vs 66.6%, p = 0.004, HR = 1.752, 95% CI 1.202–2.555), and the difference persisted after accounting for competing events (p = 0.015, HR = 1.605, 95% CI 1.098–2.346; Fig. A, B). There was no significant difference between the two groups defined by the 7 mmHg threshold in recurrence of ascites (35.7% vs 36.3%, p = 0.729, HR = 1.097, 95% CI 0.650–1.852), hemorrhage rate (16.5% vs 12.7%, p = 0.778, HR = 0.878, 95% CI 0.356–2.169), OHE (55.6% vs 65.7%, p = 0.535, HR = 1.128, 95% CI 0.771–1.652), or shunt dysfunction (21.6% vs 7.5%, p = 0.138, HR = 0.403, 95% CI 0.121–1.341) (Supplementary Fig. 5). The Fine–Gray tests demonstrated robust results (Supplementary Fig. 6). After 2:1 PSM, there were 117 patients in the ≥ 7 mmHg group and 70 patients in the < 7 mmHg group (Table ). Significant differences in sex, age, aspartate aminotransferase, creatinine, and pre-TIPS PPG were found between the two groups at baseline, and these characteristics were comparable after PSM (Tables and , and Supplementary Fig. 7). The survival curve indicated a difference between the two groups in liver-related death (≥ 7 mmHg vs < 7 mmHg, 55.3% vs 66.7%, p = 0.015, HR = 1.702, 95% CI 1.111–2.607; Fig. C), while no significant difference was observed between the two groups in the other clinical outcomes (recurrence of ascites: 35.5% vs 37.5%, p = 0.647, HR = 1.143, 95% CI 0.645–2.026; hemorrhage: 13.6% vs 13.1%, p = 0.771, HR = 1.163, 95% CI 0.422–3.206; OHE: 60.6% vs 64.4%, p = 0.941, HR = 1.016, 95% CI 0.664–1.555; shunt dysfunction: 14.8% vs 7.8%, p = 0.481, HR = 0.628, 95% CI 0.172–2.288) (Supplementary Fig. 8). The competing risk analysis confirmed the significant difference between the two groups in liver-related death (p = 0.025, HR = 1.633, 95% CI 1.065–2.503; Fig. D), and no difference was observed in the other clinical outcomes (Supplementary Fig. 9).
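The survival comparisons above rest on Kaplan–Meier estimation (the study itself used R's survival package); purely as an illustration of the method, a minimal pure-Python sketch of the product-limit estimator:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times: follow-up time for each patient (e.g., months);
    events: 1 if the endpoint occurred, 0 if censored at that time.
    Returns [(t, S(t))] evaluated at each distinct event time."""
    event_times = sorted({t for t, e in zip(times, events) if e == 1})
    s = 1.0
    curve = []
    for t in event_times:
        # Patients still under observation just before time t
        n_at_risk = sum(1 for tt in times if tt >= t)
        # Events occurring exactly at time t
        d = sum(1 for tt, e in zip(times, events) if tt == t and e == 1)
        s *= 1.0 - d / n_at_risk  # product-limit step
        curve.append((t, s))
    return curve

# Toy data: five patients, deaths at months 1, 2 and 3, censoring at 2 and 4.
for t, s in kaplan_meier([1, 2, 2, 3, 4], [1, 0, 1, 1, 0]):
    print(t, round(s, 3))  # prints 1 0.8, then 2 0.6, then 3 0.3
```

Group curves built this way are what the log-rank test compares; the Fine–Gray analyses additionally account for competing events such as liver transplantation.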
11 mmHg as a potential PPG threshold for ascites control

There were 61 patients with a post-TIPS PPG ≥ 11 mmHg and 215 with a PPG < 11 mmHg (Table ). Patients with a post-TIPS PPG ≥ 11 mmHg had a significantly higher incidence of recurrence of ascites than the other group (44.6% vs 33.7%, p = 0.023, HR = 0.560, 95% CI 0.340–0.925), and a similar pattern was observed in the competing risk analysis (p = 0.045, HR = 0.591, 95% CI 0.356–0.981; Fig. A, B). There was no significant difference between the two groups defined by the 11 mmHg threshold in liver-related death (52.4% vs 56.0%, p = 0.974, HR = 0.993, 95% CI 0.648–1.523), hemorrhage rate (13.3% vs 16.5%, p = 0.574, HR = 1.320, 95% CI 0.502–3.472), OHE (52.2% vs 58.8%, p = 0.266, HR = 1.270, 95% CI 0.833–1.936), or shunt dysfunction (28.8% vs 15.7%, p = 0.668, HR = 0.828, 95% CI 0.350–1.959), and the competing risk analysis indicated robust results (Supplementary Figs. 10–11). After 1:2 PSM, there were 56 patients with a post-TIPS PPG ≥ 11 mmHg and 97 with a PPG < 11 mmHg (Table ). Sex, age, aspartate aminotransferase, gamma-glutamyl transferase, ascites type, and pre-TIPS PPG differed significantly between the two groups at baseline, and these characteristics, except ascites type, were comparable after PSM (Tables and , and Supplementary Fig. 12). The survival curve showed a statistically significant difference in the incidence of ascites post-TIPS (≥ 11 mmHg vs < 11 mmHg, 46.5% vs 32.3%, p = 0.013, HR = 0.468, 95% CI 0.258–0.851), and the difference was also observed in the competing risk analysis (p = 0.048, HR = 0.549, 95% CI 0.305–0.991; Fig. C, D).
However, the groups defined by the 11 mmHg post-TIPS PPG threshold had similar cumulative incidences of liver-related death (52.3% vs 52.5%, p = 0.602, HR = 0.875, 95% CI 0.529–1.446), hemorrhage (14.2% vs 16.9%, p = 0.827, HR = 1.126, 95% CI 0.390–3.245), OHE (55.2% vs 57.4%, p = 0.274, HR = 1.302, 95% CI 0.811–2.089), and shunt dysfunction (31.6% vs 15.0%, p = 0.355, HR = 0.627, 95% CI 0.233–1.688), and the Fine–Gray test indicated robust results (Supplementary Figs. 13–14).

Clinical benefit for patients with a PPG of 7–11 mmHg

We compared the clinical outcomes of three groups of patients based on post-TIPS PPGs of < 7 mmHg, 7–11 mmHg, and ≥ 11 mmHg. The results indicated that patients with a PPG < 7 mmHg had a significantly higher liver-related mortality rate than the other two groups (< 7 mmHg vs 7–11 mmHg: 66.6% vs 50.5%, p = 0.002, HR = 0.531, 95% CI 0.353–0.798; < 7 mmHg vs ≥ 11 mmHg: 66.6% vs 52.4%, p = 0.107, HR = 0.670, 95% CI 0.412–1.091), whereas patients with a PPG ≥ 11 mmHg had a higher rate of ascites recurrence than the other two groups (< 7 mmHg vs ≥ 11 mmHg: 36.3% vs 44.6%, p = 0.242, HR = 1.443, 95% CI 0.781–2.667; 7–11 mmHg vs ≥ 11 mmHg: 32.2% vs 44.6%, p = 0.014, HR = 1.983, 95% CI 1.150–3.421). Conversely, patients with a PPG within the range of 7–11 mmHg had lower mortality and ascites recurrence rates (Supplementary Figs. 15 and 16). The interaction analysis showed no significant interaction between medical center and the PPG threshold of < 7 mmHg in terms of survival (p = 0.384), and no significant interaction between medical center and the PPG threshold of ≥ 11 mmHg in terms of ascites control (p = 0.319). Therefore, the threshold results were not influenced by a center effect.
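The three-band result above can be condensed into a small triage sketch (an illustration of the reported finding, not a clinical decision tool; the function and labels are our own):

```python
def ppg_band(post_tips_ppg_mmhg: float) -> str:
    """Band a post-TIPS PPG value according to the outcomes reported above:
    < 7 mmHg was associated with higher liver-related mortality,
    >= 11 mmHg with higher ascites recurrence, and 7-11 mmHg with
    lower rates of both."""
    if post_tips_ppg_mmhg < 7:
        return "below window (higher liver-related mortality)"
    if post_tips_ppg_mmhg >= 11:
        return "above window (higher ascites recurrence)"
    return "within 7-11 mmHg window"

# The cohort's median post-TIPS PPG of 9 mmHg falls within the window.
print(ppg_band(9.0))
```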
Covered TIPS is recommended for patients with recurrent or refractory ascites ; however, the hemodynamic target for post-TIPS PPG was established in the era of bare stents, so the optimal post-TIPS PPG thresholds for patients with ascites need to be re-examined with covered stents. We found a lack of dedicated studies specifically examining the relationship between PPG after covered stent placement and clinical outcomes, and the standard for post-TIPS PPG in patients with ascites remains unclear . Logically, a greater PPG reduction, or a lower absolute PPG after TIPS insertion, achieves a more adequate reduction in portal pressure and decreases the risk of ascites recurrence, but it is also associated with a greater risk of HE. Our study, which highlights the potential existence of an optimal post-TIPS PPG range for patients with recurrent and refractory ascites, found that patients may experience improved survival and ascites control with a post-TIPS PPG of 7–11 mmHg. The post-TIPS PPG in our study was 9 [7, 10] mmHg, whereas it was 6.4 ± 4.2 mmHg in the study by Bureau et al. , which included only patients with covered stents. The higher post-TIPS PPG in our study may be explained by stent diameter: only 10 mm-diameter stents were used in their study, whereas most of our patients (95.3%) received 8 mm-diameter stents. Although a lower PPG reduces the recurrence of ascites, an excessively low PPG diminishes the overall benefit to patients. Several studies have indicated that an excessively low post-TIPS PPG is a risk factor for survival, with the potential threshold possibly being 5 or 8 mmHg . In line with these findings, we observed that patients with post-TIPS PPG < 7 mmHg had significantly higher liver-related mortality than other patients in our study. This threshold may indicate the point at which the risks of aggressive intervention outweigh the benefits of reducing ascites.
The impact of low PPG on survival has been described previously and can be attributed to reduced functional hepatic reserve and an increased burden of comorbidities . Meanwhile, a lower PPG generally reflects a greater degree of shunting through the TIPS, which may reduce liver perfusion and lead to impaired liver function . Therefore, a lower PPG is not always better: the PPG should be kept within a range that reduces the recurrence of ascites without increasing other complications, and the lower PPG threshold in this study was 7 mmHg. In the multivariable analysis, recurrence of ascites was a risk factor for liver-related death in our study. Similarly, the study by Queck et al. identified an appropriate post-TIPS PPG range; the difference is that Queck et al. mainly studied patients with bare stents, whereas all of our patients received covered stents. Uncovered stents predictably tend to decrease in diameter over time; as a result, the decrease in PPG achieved post-TIPS is not sustained but progressively lost until reintervention . In contrast, TIPS with covered stents maintains the pressure drop during follow-up , supporting a longer observation period; we were therefore able to obtain clinical outcomes over a follow-up period of up to 5 years in our study. Previous studies have also shown that most patients experience resolution of ascites from 6 months after TIPS onwards ; it is therefore more valuable to observe the long-term course of ascites . In addition, our results indicated that recurrence of ascites after TIPS affected patient survival, suggesting that ascites recurrence can serve as a predictive indicator of prognosis after TIPS.
Only 1.8% of patients in our cohort underwent liver transplantation; the limited availability of donor livers, coupled with the high costs involved, may be the main reasons for this low number. The cumulative incidence of OHE was 44.6% within 1 year after TIPS in our study, and the multivariable analysis did not identify post-TIPS PPG as an influencing factor for OHE, which is consistent with other studies . The use of small-diameter (8 mm) covered stents may have reduced the incidence of OHE after TIPS , which may explain why no PPG threshold for OHE was found. This study has some limitations. First, the retrospective design of our analyses may have introduced bias; however, we included consecutive patients to minimize this. Second, the majority of patients in this study received 8-mm stents. Although our results indicated that stent diameter does not affect prognosis, further confirmation in populations with different stent diameters is required. Third, we could only collect immediate post-TIPS PPGs retrospectively, and PPG may vary over time; further trials with long-term PPG measurements after TIPS are needed to confirm this potential range. Finally, to obtain ample patient data, patients with a history of bleeding were not excluded; however, PSM was performed to minimize the influence of bleeding history on the results of the analysis. In summary, our study provides compelling evidence for the existence of an optimal post-TIPS PPG range for patients with recurrent and refractory ascites. Patients with a post-TIPS PPG of 7–11 mmHg demonstrated significantly improved survival and ascites control. Future prospective, multicenter studies are warranted to refine recommendations for personalized post-TIPS PPG management. |
Key Stages in the Development and Establishment of Paediatric Endocrinology: A Template for Future Progress | 9bdb84ab-efbc-4726-91e1-8b1411f2c6b6 | 10836736 | Physiology[mh] | Modern paediatric care depends on the existence of many sub-speciality divisions, such as cardiology, neurology, neonatology, and gastroenterology, which have developed over the past 50 years to provide a more sophisticated insight into organ- or system-based diagnosis, clinical management, and specialist training. In Western Europe, general paediatrics is a recognised training pathway; however, many paediatricians in training also follow sub-specialities of their choice, aligned with their chosen clinical and research interests. In the case of paediatric endocrinology, a small number of specialist consultant/faculty posts created in the 1970s have grown to represent the speciality within the majority of university centres and most government-funded hospitals, where one or more named consultants are designated to see new referrals and assume clinical responsibility for patients with endocrine disorders. The aim of this review was to identify the key stages and guiding principles in the development of this sub-speciality. Paediatric endocrinology is a thriving field in most Western countries, but unfortunately, several countries still lack this fundamental speciality. We will suggest a template for the staged development of the speciality in those countries, for example, in Central Europe, Africa, and Asia, seeking to emulate this model. Paediatric endocrine disorders embrace multiple pathological endocrine mechanisms. Classical endocrinology involves disorders of the endocrine glands, namely deficiency of, excess of, or resistance to hormone secretion. In terms of patterns of clinical referral, problems of growth and puberty predominate. These are followed by thyroid and adrenal disorders, together with abnormalities of bone and calcium metabolism.
Diabetes care can essentially be regarded as requiring its own personnel and facilities but remains an integral component of endocrinology. In addition to the above disorders, obesity is now becoming an important challenge, and there is a major drive to set up obesity clinics, or "complications of excess weight" clinics, for more severe and complex patients. Given the breadth of pathology and referrals, it is likely that only a few centres in each country will have expertise in all areas, such as growth, puberty, adrenal disorders, bone health, and complex obesity. The need for expert care requires the sharing of particular experience between centres and the establishment of nationally commissioned services with multi-disciplinary teams. Before the emergence of paediatric endocrinology as a recognised speciality, children with hormonal disorders were generally under the care of adult physicians; adult endocrinology was established considerably earlier than paediatric endocrinology. It was the formation of organisations such as the European Society for Paediatric Endocrinology (ESPE) in Europe and the Lawson Wilkins Pediatric Endocrine Society in the US that focused attention on the importance of paediatricians becoming sufficiently trained to adopt these responsibilities. In some countries, notably in Eastern and Central Europe, children with endocrine disorders still remain predominantly under the care of adult physicians; this situation is more likely where paediatric sub-speciality development is lacking. Close working relationships are extremely beneficial to patient care, and clinician collaboration can be very positive in the care of rare disorders such as paediatric Cushing's syndrome, as we have experienced personally in our institution . It is therefore extremely important to avoid a schism or competition between paediatric and adult physicians when developing paediatric endocrine services.
Such positive relationships are also needed for the development of cross-speciality expertise to optimise outcomes, for example, between paediatric endocrinology, paediatric urology, and psychology for children with differences in sex development. The practice of clinical endocrinology depends on the existence of laboratory support for the determination of hormone concentrations, which are interpreted to diagnose excess, normal, or deficient endocrine function. Hormone determination is necessary if effective replacement therapy is to be used. Endocrine reference laboratories are established in most tertiary referral units and operate with a validated system of quality control of hormone assays. Determination of key hormones such as GH, insulin, T4, TSH, cortisol, oestradiol, testosterone, LH, and FSH is widely available, whereas assays for more niche hormones such as IGF-I vary considerably between laboratories . It is essential that assay-specific reference intervals for different paediatric age groups are available for the determination of hormones such as IGF-I. Fundamental measures such as screening for congenital hypothyroidism must be a priority. The arrival of molecular investigations to determine possible genetic variants has made an enormous contribution to the diagnosis of rare endocrine disorders . For example, in the field of growth disorders, numerous genetic variants have been identified in the pathway of GH action, with a relatively high rate of positive variants identified in selected populations . Apart from genetic analysis for potentially life-threatening familial disorders such as multiple endocrine neoplasia, which is established in named reference laboratories , next-generation genetic sequencing remains predominantly a grant-dependent research activity. Diagnostic genetic panels are now available for a range of disorders in some countries.
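Returning to hormone assays: the need for assay-specific, age-banded reference intervals can be illustrated with a small sketch. The interval values and age bands below are purely illustrative placeholders, not clinical reference data, and the function name is hypothetical.

```python
# Purely illustrative age bands and limits -- NOT clinical reference data.
IGF1_REFERENCE = [
    # (min_age, max_age_exclusive, lower_limit, upper_limit) in ng/mL
    (0, 6, 20, 200),
    (6, 12, 60, 400),
    (12, 18, 120, 600),
]

def classify_igf1(value, age_years):
    """Flag an IGF-I result as 'low', 'normal', or 'high' for the
    patient's age band; raises if no band covers the age."""
    for lo_age, hi_age, lo, hi in IGF1_REFERENCE:
        if lo_age <= age_years < hi_age:
            if value < lo:
                return "low"
            if value > hi:
                return "high"
            return "normal"
    raise ValueError(f"no reference band for age {age_years}")

# The same absolute value is interpreted differently at different ages
print(classify_igf1(100, 4))   # normal
print(classify_igf1(100, 14))  # low
```

The point of the sketch is that a single fixed cut-off would misclassify results: an IGF-I level that is normal for a young child can be abnormally low for an adolescent, which is why assay- and age-specific intervals are essential.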
The presence of certain "red flags" which point to increased likelihood of a genetic pathogenesis is now recognised in fields such as growth, puberty, thyroid, and adrenal disorders, in addition to cases of hyperinsulinism .

The ESPE: A Truly European Initiative

Specialisation is a phenomenon which pervades all walks of life and professional organisations. Expertise is at a premium when it comes to resolving problems and moving forward. However, the acquisition of valid expertise is a challenging process dependent on well-developed training pathways. An ethos of sharing experience and finding common intellectual ground would have diffused through the atmosphere of a meeting of thirty-two paediatricians and endocrinologists, from European countries and Israel, convened by Professor Andrea Prader at the Kinderspital, Zurich, in July 1962. All attendees were paediatricians with responsibilities in the new field of endocrine disorders, and Prader's aim was to create a group of specialists with a common interest to discuss science and clinical problems. The meeting contained a scientific agenda and it was agreed that the participants of this unofficial "Paediatric Endocrinology Club" should meet once a year, always in a different European country. The "Club" soon blossomed into a scientific society, and the ESPE was born . A similar society was founded in the USA in 1972 carrying the name of Dr. Lawson Wilkins. ESPE became the European cradle of paediatric endocrinology and a reference point for clinicians seeking contacts for clinical advice and training experience. Broadly speaking, ESPE has developed four main roles. Firstly, it is an educational forum, providing opportunities for presentation of new data and for reviews of clinical experience across the speciality and developing international schools in paediatric endocrinological disorders.
Secondly, fellowships, largely sponsored by pharmaceutical companies, were developed, allowing trainees to travel abroad and spend time in host institutions for research, mentorship, and experience of clinical practice in a culture and environment different from their own. Thirdly, ESPE stimulated research projects which are largely international and collaborative. Fourthly, it created its own welcoming atmosphere for delegates, furthering collegiate exchange, which generated contacts, collaboration, discussion, and friendship among its members. Above all, ESPE is a point of reference for a geographically diverse group of both established and aspiring paediatric endocrinologists and allied professionals.

The Need to Gain Experience Outside the Home Institution: Flag Bearers for the Speciality

Any scientific discipline will advance more rapidly in institutions with a critical mass of brain power linked to research structure and facilities and funding for new and imaginative projects. In the 1960s and 1970s, paediatric endocrinology soon established its centres of excellence. Sponsored research fellowships hardly existed, and young, ambitious European endocrinologists made personal arrangements to spend time for research and clinical training in established centres outside their home country. These physicians became the flag bearers who returned home having learnt new techniques of research, the fundamental principles of the organisation of a clinical paediatric endocrinology service, and frequently a more international language. Such an experience remains equally valid today. Most European countries can attribute the development of paediatric endocrinology to a handful of physicians, who gained experience outside their own centre of excellence and became the leaders of the next generation. Such a model of learning skills outside one's home institution and bringing expertise home remains of crucial importance today.
Acquired skills include research and clinical abilities, together with overall organisational expertise. Countries that have not sent trainees abroad will by definition lag behind in learning new approaches to diagnostic assessment and indications for new therapeutic advances. This is a key training aspect for developing a framework of progress for the speciality, with many successful examples throughout Europe and other Western countries, South America, Japan, Southeast Asia, and Australasia.

Training Fellowships

The principles of sending trainees abroad to learn and bring expertise home can be complex and need to be thought through and addressed. Pharma has played a key role in sponsoring short- and long-term fellowships from home to host countries and institutions. ESPE and national scientific societies also play a key role in supporting such fellowships, which, by creating links and bonds through the visiting fellow, can foster successful ongoing collaborations in terms of research activities, friendship, and personal ties. A working relationship between the two centres, designed to last longer than the duration of the fellowship, can be of mutual benefit in terms of continued training, e-learning, and exchange visits. Despite the many advantages of such arrangements, there may be challenges when the trainee returns from a successful fellowship. The experience gained may engender conflict with senior personnel or with other trainees at the home institution who have not had the opportunity to travel. It is helpful to consider these potential issues in advance, including strategies to ensure the development of the trainee's subsequent career path in their home country and to maximise the benefit to the institution from the successful fellowship. In particular, strategies that recognise and promote future leaders can be vital in the building of successful paediatric endocrinology departments.
Educational Initiatives

ESPE has become a powerful force for education. In addition to its annual meetings, it organises international schools in postgraduate paediatric endocrinology, diabetes, obesity, and metabolism courses and also faculty teaching to regions such as Eastern Europe, the Caucasus, the Maghreb, Arab countries, Central Asia, and countries such as Kenya and Nigeria. The new sub-Saharan African initiative Programme d'enseignement en Endocrinologie-Diabétologie pédiatrique pour l'Afrique Francophone is a further example. Evidence from ESPE "Winter School" indicates that a high proportion of its previous trainees go on to develop a career with paediatric endocrinology as a major interest . ESPE-driven teaching in Africa is also embraced by the Paediatric Endocrine Training Centres for Africa initiative. Full details are available on the ESPE website, https://eurospe.org . Distance learning is delivered through the e-learning global website www.espe-elearning.org . Although the contribution of these courses to local expertise has been very successful, on-site education from "imported" experts is also critical to cement the development of the speciality in resource-poor countries, such as these.

A discussion about the establishment of a paediatric sub-speciality in a resource-poor country should include mention of the telemedicine model of store-and-forward consultations as a training device. This was proposed by Collegium Telemedicus ( www.collegiumtelemedicus.org ) and consists of a consultation platform that is available on the web, offering the opportunity to confidentially discuss clinical cases . As opposed to real-time consultation, so-called store-and-forward consultations imply that the referring physician creates a consultation request that is stored on the digital platform until forwarded to a consultant. Communication without the need for a scheduled appointment avoids real-time video conferencing and its technical disadvantages and allows the practitioner to return to the consultation document at will, which facilitates follow-up communication. Store-and-forward consultations were a key component of an education initiative launched in Haiti in 2015. The internal relations council of the PES introduced a comprehensive training programme in association with ESPE to develop the Pediatric Endocrinology Education Program for Haiti ( www.peephaiti.org ) with the aim of establishing paediatric endocrinology as a speciality in Haiti. Paediatric endocrinology consultation services were established using the store-and-forward online platform with positive participation by referring physicians, with appropriate advice given by consultant paediatric endocrinologists, mostly from the USA, in 100% of cases. The diagnosis was clarified in 88% and management improved in 77% of cases.
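The store-and-forward workflow described above can be sketched as a simple asynchronous queue: a case is stored on submission, forwarded to a consultant later, and answered at will, with no scheduled real-time session. The class and method names below are hypothetical illustrations, not the actual Collegium Telemedicus API.

```python
from dataclasses import dataclass, field

@dataclass
class Consultation:
    case_id: str
    question: str
    replies: list = field(default_factory=list)
    status: str = "stored"  # stored -> forwarded -> answered

class StoreAndForwardPlatform:
    """Cases are stored on submission and forwarded to a consultant later;
    no scheduled real-time appointment is ever required."""
    def __init__(self):
        self.cases = {}

    def submit(self, case_id, question):
        """Referring physician creates a consultation request."""
        self.cases[case_id] = Consultation(case_id, question)

    def forward(self, case_id):
        """Platform routes the stored case to a consultant."""
        self.cases[case_id].status = "forwarded"
        return self.cases[case_id]

    def reply(self, case_id, advice):
        """Consultant answers at will; repeated replies support follow-up."""
        case = self.cases[case_id]
        case.replies.append(advice)
        case.status = "answered"

platform = StoreAndForwardPlatform()
platform.submit("HT-001", "8-year-old with growth failure; IGF-I pending")
platform.forward("HT-001")
platform.reply("HT-001", "Suggest repeat IGF-I and bone age X-ray")
print(platform.cases["HT-001"].status)  # answered
```

Because the consultation record persists, either party can return to it later and append follow-up messages, which is the property the text highlights as an advantage over real-time video conferencing.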
The store-and-forward system of consultations is therefore an alternative to face-to-face teacher visits and provides clinical and educational benefit to local medical staff . How do we define a centre of medical excellence? This well-used term has numerous dimensions. Firstly, excellence is underpinned by quality medical, nursing, and technical personnel who are able to deliver state-of-the-art care in a caring, compassionate, and efficient environment. Secondly, facilities for investigation, diagnosis, and treatment of complex cases follow, with development of other paediatric sub-specialities. Thirdly, a track record of research activity leads to publication of key data in high-impact peer-reviewed journals. Fourthly, delivery of training in both medical and nursing care is paramount. The development of such an institution might appear to be out of the reach of resource-poor countries, yet the benefits to the local, regional, and national population can be immeasurable. In the UK, there has been agreement between paediatricians and the government about which institutions are designated tertiary centres. Tertiary paediatric endocrine services are usually co-located with other tertiary specialities (e.g., neurosurgery, PICU, and paediatric urology), and specialist multi-disciplinary teams are developed to enable clinical decisions related to detailed investigations or procedures. Where a child is managed depends on the complexity of the endocrine condition and the need for other specialist services. In the UK, there is now a settled view on which endocrine conditions are managed at the tertiary centre, which can be managed in an outreach clinic with care shared with the local team, and which do not require input from a tertiary centre. A pattern of regular visits, perhaps every 3–6 months, by the tertiary specialist to these centres, or "outreach" clinics, has been developed.
The advantages of this approach are that patients are managed as close to home as possible (thus reducing the need for families to travel), health resources are used effectively at each centre, and clinical experience from the tertiary centre is devolved. Complex cases still need to be referred to, and remain under the care of, tertiary centres, where more experience exists. There is also a requirement for a few quaternary centres to care for rare life-threatening disorders such as congenital hyperinsulinism. National societies in paediatric endocrinology can play an important role in establishing how care should be delivered and commissioned for children, as well as advising the government on referral pathways for different endocrine conditions. A positive outcome of this model of care is the development of regional networks composed of network centres (non-tertiary district general hospitals) and a tertiary centre covering a population of 1–5 million . Close communication between all clinicians in the network is beneficial to medical care, particularly in emergencies, and can also facilitate research collaboration. A notable example of the power of a regional network to generate outstanding research was demonstrated in the field of childhood obesity by the University Children's Hospital (Universitätskinderklinik) in Leipzig, where regional liaison has facilitated analysis of key population data, offering predictions of obesity with important public health consequences . An international syllabus for training in paediatric endocrinology, co-designed by ESPE, has recently been developed . The syllabus includes the requirements for trainers, training centres, and the facilities that should be available to trainees. In the UK, the requirements for future appointments at consultant/faculty level in each region of the country are overseen by an administrative resource such as the regional deanery, funded by the Government Department of Health.
Linked to these requirements is a system of national GRID appointments, which are highly competitive and provide successful applicants with specialist national training numbers. A specialist national training number allows the trainee to access quality-assured training in paediatric endocrinology at a designated tertiary training centre; completion of the training programme enables the trainee to apply for a substantive consultant post in paediatric endocrinology. This system provides dual accreditation in paediatrics and paediatric endocrinology. Training in paediatric endocrinology follows a mandatory period of 4 years of general paediatric training and then 2–3 years of clinical training in the speciality. In addition to clinical training, trainees are encouraged to undertake a formal period of clinical or laboratory research aimed at the acquisition of a research degree. The roles and responsibilities of nurses vary widely across Europe and beyond, ranging from highly specialised nursing professors with major clinical responsibilities, able to run their own outpatient clinics and prescribe medications, to nursing personnel performing only general paediatric nursing care. Paediatric endocrinology is a field where trained specialist nurses can play a major role in patient care and become integral and valued members of the clinical team. Accurate measurement of height and assessment of growth, performance of dynamic hormonal tests, and close patient contact allowing non-judgemental questioning about adherence to therapies such as human growth hormone are examples of effective nursing involvement in clinical endocrine care. A dedicated nursing component would be considered a vital asset for any emerging paediatric endocrinology department.
We are writing this review because of our enthusiasm for quality paediatric endocrine care and not as a criticism of the lack of its development in certain cultural settings. We are fully aware of the challenges that the creation, development, establishment, and durability of a high-quality paediatric sub-speciality pose. The combination of digital and human training techniques can accelerate and secure progress. The template and guiding principles for the creation of a sub-specialty referral centre have been described and are underpinned by the vision, enthusiasm, and ambition of the present and future leaders, together with national funding and community support. Advice and support from a national or regional scientific society, such as ESPE, can also make a major contribution. Determination to succeed will attract like-minded professionals to join the challenge. Good working relationships between colleagues, not only within a single institution, but with other institutions, will result in unexpected benefits and key political support. M.O.S. has had consultancy agreements with Merck Healthcare KGaA, Darmstadt; Pfizer; Sandoz; Visen; GenSci; OPKO; and Springer Healthcare IME and has received honoraria for lectures from Novo Nordisk, Ipsen, and Ascendis. M.D.C.D. has received honoraria for lectures and webinars from Novo Nordisk and Sandoz. J.H.D. has received travel bursaries from Pfizer, Ipsen, Sandoz, and Novo Nordisk; has received honoraria for lectures; and had consultancy agreements with Kyowa Kirin. H.L.S. received a travel bursary from Novo Nordisk, honoraria for lectures/educational meetings from Novo Nordisk, Pfizer, Ipsen, and Sandoz and for educational articles from Pfizer. None of the authors received any funding related to the preparation of this manuscript. M.O.S. conceived the idea of the review and wrote the first draft. M.D.C.D., J.H.D., and H.S.L. made major contributions to the text and reviewed the final version. |
Perspectives of paediatric occupational therapists on the use of evidence-based practice in Kuwait: a qualitative study | 04c8abc6-752b-4629-990b-08a2181b25ae | 11647311 | Pediatrics[mh] | Evidence-based practice (EBP) integrates the latest research evidence with clinical experience and patient values. Clinicians implement EBP when they use the best available research evidence along with their clinical experience, considering their patients’ needs, values and preferences in clinical decision-making and healthcare delivery. The application and implementation of EBP advances any profession and enhances the quality of services delivered to patients. EBP has also been found to reduce levels of burn-out among therapists. Occupational therapists play a crucial role in delivering effective and efficient healthcare services for patients in need. High-quality occupational therapy services require careful consideration and implementation of the best available evidence. Understanding factors related to EBP implementation is crucial for supporting students, practitioners and educators in the healthcare field. Several surveys have been conducted worldwide to study the factors related to EBP application. Despite the positive attitude towards EBP among occupational therapists, there have been multiple challenges and barriers to its global implementation. Some of the challenges for applying EBP among occupational therapists in Sweden included insufficient research evidence for interventions, lack of support (ie, lack of encouragement at the workplace to use research and not having easy access to guidelines) and time. Additional barriers were found among occupational therapists in Saudi Arabia, including insufficient education, limited resources, training and funding, and a lack of research skills and knowledge.
Krueger et al found that, among occupational therapists in the USA, implementing EBP was associated with higher education (ie, a doctorate), practising self-reflection, receiving organisational support, having time for EBP activities and having access to full-text articles. However, there has been no clear focus on the use of EBP in the paediatric population, despite the unique specifications and needs of this population. Few researchers have studied this population, and studies have been limited to specific ages or patient populations. Baig et al studied EBP use among occupational and physical therapists in the USA when providing services for patients with cerebral palsy aged between 0 and 3 years. The authors reported that although the majority of participants highlighted the importance of EBP in making clinical decisions, only a minority used EBP to develop interventions. Furthermore, interventions with a high level of evidence (ie, bimanual therapy and constraint-induced movement therapy) were infrequently used, whereas other treatment methods (ie, neurodevelopmental treatment and sensory regulation therapy) were not used for their recommended purposes. In Kuwait, little research has been conducted on the standards of care provided to patients by healthcare workers. Physicians in primary care and dentists appear to rely more on their judgement than on EBP in their clinical decisions. Although physicians had a somewhat positive attitude towards EBP, studies indicated that they had low levels of knowledge of EBP and were unaware of trusted EBP resources. Studies have found that similar barriers apply to the use of EBP by physical therapists in Kuwait, who were found to face issues parallel to those highlighted earlier (ie, insufficient time, lack of information resources and inapplicability of research findings to Kuwait’s population).
In Kuwait and the Middle East, the use of EBP among occupational therapists working with children raises concern about a lack of rigour in practising effective methods. Therefore, this study aimed to explore factors related to the use of EBP from the perspective of paediatric occupational therapists working in Kuwait. Moreover, this study broadly and qualitatively considered the unique specifications of the paediatric population. Such knowledge, gathered both globally and regionally, is essential for guiding educators to support the development of students’ and practitioners’ EBP utilisation abilities and the facilitation of EBP in clinical practice. Study design A phenomenological qualitative study design was adopted using in-depth, face-to-face and semistructured interviews. The qualitative approach helped achieve the study objectives by providing insight into the participants’ perspectives on the studied phenomenon. Participants A purposive sampling method was used. The eligibility criteria were as follows: (1) practising occupational therapists in Kuwait, (2) a minimum of 2 years of clinical experience with paediatrics and (3) the ability to understand and speak English. The participants were recruited from various government and private hospitals, clinics and schools in Kuwait. Participant recruitment began using social media platforms and networks (ie, shared invitational messages sent to occupational therapists in Kuwait). Occupational therapists interested in the research topic contacted the principal investigator for study details, as their contact details were provided in the invitational message. During initial contact, participants were screened per the inclusion criteria, and interviews were scheduled. On the day of the interview, participants were provided with a written information sheet about the study, and their consent was obtained.
Data collection Interviews were conducted at the participants’ workplaces in a private room where only the researcher and participant were present. A trained interviewer (DD) conducted the interviews using a guide developed for the purpose of the study. The interviews lasted 45–60 min. All interviews were conducted in English and audio recorded. Once data saturation was achieved, that is, when no additional insights were obtained from the last two individual interviews, no further participants were recruited. Data analysis The data collected were transcribed verbatim and subsequently analysed thematically, following the guidelines provided by Braun and Clarke. An inductive approach was used in which the transcribed data were coded independently by three research team members (ZJ, DD and DA), who then met to discuss and combine their codes into themes and subthemes. Any uncertainties and disagreements between them were carefully scrutinised via discussion. All team members maintained an audit trail and used memoing techniques to enhance the trustworthiness of the data. The study’s findings were also supported through member checking with informants (eg, participants’ direct quotes). In addition, the research team members engaged in ongoing reflexivity and reflection to remain aware of their influence on the collected data and analysis, given that they were all female. Patient and public involvement None.
The participant pool consisted of 10 paediatric occupational therapists (see ). Of the 10 participants, 4 were male; 3 had master’s degrees and the remaining 7 had bachelor’s degrees. The average age of the participants was 35.6 years (range 29–41 years), and their average number of years of experience was 11.5 (range 4–20 years). The findings can be transferred to other contexts because thorough descriptions of the participants are provided, as indicated in . The interview analysis resulted in the emergence of three main themes: (1) sources of motivation, (2) organisational support for EBP use and (3) creativity and flexibility in implementing EBP. As shown in , each theme is presented with subthemes to provide further insight into the research objectives. Theme 1: sources of motivation All participants agreed about the importance of using EBP, clarifying how they used it in practice on every occasion. Participant 6 explained: I use EBP all the time, I don’t put a plan or a strategy for a child without evidence Participants explained a variety of motivational sources that encouraged them to use EBP. These sources are condensed into three main subthemes: (a) personal factors, (b) clients’ outcomes and (c) supportive social environment. Subtheme A: personal factors Participants claimed that being an occupational therapist gave them a sense of responsibility towards their clients. Participant 5 commented: For me, being an OT, I feel that is a huge responsibility job… you are dealing with children, you cannot use trial and error. You need proper evidence, otherwise [if EBP is not used] it will lead to negative results. It will affect the child’s future, so EBP is the best. Participant 3 also mentioned: It is my duty, and I mean, the fear from God.
A sense of responsibility can also arise from client and family expectations. The use of EBP satisfies therapists by giving them confidence in their jobs. For example, participant 9 mentioned: I want to be more confident when applying a technique… It [EBP] will give you the guidelines and how to treat and how to start from basically start from the assessment till the intervention and even going to the home program. Subtheme B: client outcomes Several participants indicated client outcomes as motivators for improving practice methods. The participants agreed that being an occupational therapist gave them a passion for helping others. Participant 6 stated: My clients are my motivators to use EBP. To see the outcome in your clients. Thus, outcomes that motivate therapists in their work include progress in client conditions. Participant 8 added: I’m a man who loves when people remember me, I love when I provide service to see people’s satisfaction. It is an indescribable, blissful feeling of contentment. Thus, outcomes are not only measured by clients’ progress; the satisfaction level of the entire family is another factor. Subtheme C: supportive social work environment The third subtheme highlighted how supportive and diverse work environments proved to be motivational sources for these therapists. The role of supportive colleagues was evident from the data. All participants highlighted the role of peers in applying EBP (ie, formal discussions, monthly in-services, weekly journal clubs/seminars and friendly conversations). Participant 7 highlighted diverse environmental support with: With our colleagues, we discuss things such as conditions. We have group discussions about cases… taking their advice to implement, get their feedback also and then apply that. Similarly, participant 10 added: I believe that discussion is very effective. It is the most beneficial thing for me thus far. The therapists also obtained access to books and journal articles from colleagues when needed.
When participant 6 discussed how she kept herself updated on EBP, she mentioned: I’m a clinical employer for (the institution’s name). I don’t have access to these types of engines [PubMed and scope of science]. So, I have to be like a worker in university or use another person’s account in the university to get access to it. I did not pay for accessibility, I usually use my colleagues’ account. For these participants, the supportive social environment also helped to overcome other barriers to implementing EBP (ie, organisational sources providing accessibility to resources and implementation strategies towards improving professional development), which is further discussed in the second theme below. Theme 2: organisational support for EBP use The second theme focused on the practice of using EBP in relation to the limited organisational support therapists receive from their employers. This theme is described and expanded by considering three subthemes: (a) cost and inaccessibility, (b) continuous professional development (CPD) and (c) effective strategies to be implemented. Subtheme A: cost and inaccessibility to EBP resources For the participants, the main barriers to using EBP were the inaccessibility of resources (ie, books, journal articles and workshops) and their high costs. Participant 2 stated: Resources are there, but we don’t know how to access them [without the organization giving you access]… You cannot access the university’s library… To update my knowledge, I’m ready to spend money, but not if it’s too expensive. Nevertheless, some practical techniques require certification and are costly. Participants complained about the high costs, as there are no nearby organisations offering such courses and no financial support from employers. Therefore, therapists must travel abroad for certification.
Participant 4 indicated: Access to journals it is a big factor [as a barrier to EBP use] sometimes it is from my personal income… [My employer] will support you financially only for conferences if you meet the minimum 10 years of service in the (her institution’s name). Okay but throughout these 10 years, I need the EBP to work on. You know these rules limit the majority of us. Not everybody is capable considering their financial status. Several therapists reported spending their annual leave and money as the only means of enhancing their professional development. Subtheme B: CPD Not receiving financial support from employers was a major concern for the participants. Several participants believed that the lack of support from employers was owing to an organisational underestimation of the importance of CPD. Participant 9 mentioned: Some of the clinics don’t want to change the practice or anything; they want you to provide the service; this is what they care about, the number of patients. Therefore, participants perceived that institutions valued the quantity, rather than the quality, of the services rendered over professional development. The lack of CPD support also extended to therapists’ research-related knowledge. Participant 7 stated: I read [research articles] sometimes. I cannot understand and have to keep reading because of the quantitative and qualitative analysis. I read about these analysis to understand what the article says and what the study’s findings are. They are written in a language not understandable by us. They have a different language. Accordingly, participants believed that these issues could be solved by their employers, indicating ways to overcome the barriers highlighted in the next subtheme. Subtheme C: effective strategies for implementing EBP To minimise barriers, participants highlighted facilitators as potential strategies to help therapists and their employers implement EBP.
The participants also suggested that employers could be supportive and push for using EBP. Participant 2 proposed: If there’s a common library, especially for (the institution name), it is a big institution, so if they have a common library that the employees can go to and approach it. Participant 6 also highlighted: Free access to library ummm remote access no need to come to the university to access. Some of the participants’ suggested strategies can be summarised as having a library in their workplace, access to resources that can be reached remotely from home and more collaboration with the university in organising seminars and workshops. Therefore, such ideas and strategies, if adopted by organisations, are essential to address the barriers therapists perceive in providing services. Theme 3: creativity and flexibility The high costs and inaccessibility of EBP resources were not the only limiting factors or barriers to EBP highlighted by participants. Time and limited resources were also considered. However, despite the barriers and negativity highlighted by the participants, they remained positive. Subtheme A: time management All the participants acknowledged time as a requirement to stay updated on EBP. However, they believed that there was no shortage of time. Time is not a barrier if you plan it appropriately, time at all won’t be a barrier. (Participant 1) Participants believed that time management was of primary importance, as illustrated by Participant 4: We work for approximately 7 hours, and the day is 24 hours, so you have the time. If we sleep for 8 hours, and another 8 hours for work and 8 for leisure, you can take 1 hour and a half from each. Subtheme B: implementation of EBP in daily practices The participants shared their concerns regarding the implementation of evidence in their clinical practice.
Implementation is often impossible owing to the lack of published research on specific conditions/disabilities in the region. There are like rare diagnoses, you won’t find updated evidence on; other things, there are no studies about at all (Participant 4). In addition, there are no studies applicable to their community, culture or population, as participant 8 mentioned: you might find studies, but not in your country. You need something from your community… in our Arab community, we do not have many studies, and this is a barrier a very difficult barrier. Nevertheless, the available resources offer limited guidance on how to apply the reported techniques in practice. However, other participants felt that there were no such barriers and that occupational therapists use creativity to find solutions. There are no barriers. We are OT. We are creative minds. We can make everything simple. (Participant 5) Thus, participants highlighted the importance of being flexible and using their creativity as effectively as possible in their practice. In addition, some therapists believed that the application of EBP occasionally contradicts the client-centred approach. However, creativity was used to implement the evidence while addressing client and family concerns. Participant 1 elaborated on the client-centred practice by stating: Sometimes, we modify it [the intervention] according to plan to hold interest of the child we modify that activity.
Subtheme A: personal factors Participants claimed that being an occupational therapist gave them a sense of responsibility towards their clients. Participant 5 commented: For me, being an OT, I feel that is a huge responsibility job… you are dealing with children, you cannot use trial and error. You need proper evidence, otherwise [if EBP is not used] it will lead to negative results. It will affect the child’s future, so EBP is the best. Participants 3 also mentioned: It is my duty, and I mean, the fear from God. A sense of responsibility can also arise from client and family expectations. The use of EBP satisfies therapists by giving them confidence in their jobs. For example, participant 9 mentioned: I want to be more confident when applying a technique… It [EBP] will give you the guidelines and how to treat and how to start from basically start from the assessment till the intervention and even going to the home program. Subtheme B: client outcomes Several participants indicated client outcomes as motivators for improving practice methods. The participants agreed that being an occupational therapist gave them a passion for helping others. Participant 6 stated: My clients are my motivators to use EBP. To see the outcome in your clients. Thus, outcomes that motivate therapists in their work include progress in client conditions. Participant 8 added: I’m a man who loves when people remember me, I love when I provide service to see people’s satisfaction. It is an indescribable, blissful feeling of contentment. Thus, outcomes are not only measured by clients’ progress; the satisfaction level of the entire family is another factor. Subtheme C: supportive social work environment The third subtheme highlighted how and diverse environments proved to be motivational sources for these therapists. The role of supportive colleagues was evident from the data. 
All participants highlighted the role of peers applying EBP (ie, formal discussion, monthly in-services, weekly journal clubs/seminars and friendly conversations). Participant 7 highlighted diverse environmental support with: With our colleagues, we discuss things such as conditions. We have group discussions about cases… taking their advice to implement, get their feedback also and then apply that. Similarly, participant 10 added: I believe that discussion is very effective. It is the most beneficial thing for me thus far. The therapists also obtained access to books and journal articles from colleagues when needed. When participant 6 discussed how she kept herself updated on EBP, she mentioned: I’m a clinical employer for (the institution’s name). I don’t have access to these types of engines [PubMed and scope of science]. So, I have to be like a worker in university or use another person’s account in the university to get access to it. I did not pay for accessibility, I usually use my colleagues’ account. For these participants, the supportive social environment also helped to overcome other barriers to implementing EBP (ie, organisational sources providing accessibility to resources and implementation strategies towards improving professional development), which is further discussed in the second theme below. Participants claimed that being an occupational therapist gave them a sense of responsibility towards their clients. Participant 5 commented: For me, being an OT, I feel that is a huge responsibility job… you are dealing with children, you cannot use trial and error. You need proper evidence, otherwise [if EBP is not used] it will lead to negative results. It will affect the child’s future, so EBP is the best. Participants 3 also mentioned: It is my duty, and I mean, the fear from God. A sense of responsibility can also arise from client and family expectations. The use of EBP satisfies therapists by giving them confidence in their jobs. 
For example, participant 9 mentioned: I want to be more confident when applying a technique… It [EBP] will give you the guidelines and how to treat and how to start from basically start from the assessment till the intervention and even going to the home program. Several participants indicated client outcomes as motivators for improving practice methods. The participants agreed that being an occupational therapist gave them a passion for helping others. Participant 6 stated: My clients are my motivators to use EBP. To see the outcome in your clients. Thus, outcomes that motivate therapists in their work include progress in client conditions. Participant 8 added: I’m a man who loves when people remember me, I love when I provide service to see people’s satisfaction. It is an indescribable, blissful feeling of contentment. Thus, outcomes are not only measured by clients’ progress; the satisfaction level of the entire family is another factor. The third subtheme highlighted how and diverse environments proved to be motivational sources for these therapists. The role of supportive colleagues was evident from the data. All participants highlighted the role of peers applying EBP (ie, formal discussion, monthly in-services, weekly journal clubs/seminars and friendly conversations). Participant 7 highlighted diverse environmental support with: With our colleagues, we discuss things such as conditions. We have group discussions about cases… taking their advice to implement, get their feedback also and then apply that. Similarly, participant 10 added: I believe that discussion is very effective. It is the most beneficial thing for me thus far. The therapists also obtained access to books and journal articles from colleagues when needed. When participant 6 discussed how she kept herself updated on EBP, she mentioned: I’m a clinical employer for (the institution’s name). I don’t have access to these types of engines [PubMed and scope of science]. 
So, I have to be like a worker in university or use another person’s account in the university to get access to it. I did not pay for accessibility, I usually use my colleagues’ account. For these participants, the supportive social environment also helped to overcome other barriers to implementing EBP (ie, organisational sources providing accessibility to resources and implementation strategies towards improving professional development), which is further discussed in the second theme below. The second theme focused on the practice of using EBP in relation to the limited organisational support therapists receive from their employers. This theme is described and expanded by considering three subthemes: (a) cost and inaccessibility, (b) continuous professional development (CPD) and (c) effective strategies to be implemented. Subtheme A: cost and inaccessibility to EBP resources For the participants, the main barriers to using EBP were the inaccessibility of resources (ie, books, journal articles and workshops) and their high costs. Participant 2 stated: Resources are there, but we don’t know how to access them [without the organization giving you access]… You cannot access the university’s library… To update my knowledge, I’m ready to spend money, but not if it’s too expensive. Nevertheless, some practical techniques require certification and are costly. Participants complained about the high costs, as there are neither nearby organisations offering such courses nor financial support from employers. Therefore, therapists must travel abroad for certifiication. Participant 4 indicated: Access to journals it is a big factor [as a barrier to EBP use] sometimes it is from my personal income… [My employer] will support you financially only for conferences if you meet the minimum 10 years of service in the (her institution’s name). Okay but throughout these 10 years, I need the EBP to work on. You know these rules limit the majority of us. 
Not everybody is capable considering their financial status. Several therapists reported spending their annual leave and money as the only means of enhancing their professional development. Subtheme B: CPD Not receiving financial support from employers was a major concern for the participants. Several participants believed that the lack of support from employers was owing to an organisational underestimation of the importance of CPD. Participant 9 mentioned: Some of the clinics don’t want to change the practice or anything; they want you to provide the service; this is what they care about, the number of patients. Therefore, participants perceived that institutions found importance in the quantity, and not in the quality, rendered through professional development. The lack of CPD support also extended to therapists’ research-related knowledge. Participant 7 stated: I read [research articles] sometimes. I cannot understand and have to keep reading because of the quantitative and qualitative analysis. I read about these analysis to understand what the article says and what the study’s findings are. They are written in a language not understandable by us. They have a different language. Accordingly, participants believed that these issues could be solved by their employers indicating ways to overcome the barriers highlighted in the next subtheme. Subtheme C: effective strategies for implementing EBP To minimise barriers, participants highlighted facilitators as potential strategies to help therapists and their employees implement EBP. The participants also suggested that employers could be supportive and push for using EBP. Participant 2 proposed: If there’s a common library, especially for (the institution name), it is a big institution, so if they have a common library that the employees can go to and approach it. Participant 6 also highlighted: Free access to library ummm remote access no need to come to the university to access. 
Some of the participants’ suggested strategies can be summarised as having a library in their workplace, remote access to resources from home, and more collaboration with the university in organising seminars and workshops. Such ideas and strategies, if taken up by organisations, are essential to addressing the barriers therapists face in providing services.
The high costs and inaccessibility of EBP resources were not the only barriers to EBP highlighted by participants; time and limited resources were also considered. However, despite these barriers, the participants remained positive. Subtheme A: time management All the participants acknowledged time as a requirement to stay updated on EBP. However, they believed that there was no shortage of time. Time is not a barrier if you plan it appropriately, time at all won’t be a barrier. (Participant 1) Participants believed that time management was of primary importance, as illustrated by Participant 4: We work for approximately 7 hours, and the day is 24 hours, so you have the time. If we sleep for 8 hours, and another 8 hours for work and 8 for leisure, you can take 1 hour and a half from each. Subtheme B: implementation of EBP in daily practices The participants shared their concerns regarding the implementation of evidence in their clinical practice. Implementation is often impossible owing to the lack of published research on specific conditions/disabilities in the region. There are like rare diagnoses, you won’t find updated evidence on; other things, there are no studies about at all (Participant 4). In addition, there are no studies applicable to their community, culture, or population, as participant 8 mentioned: you might find studies, but not in your country. You need something from your community… in our Arab community, we do not have many studies, and this is a barrier a very difficult barrier. Moreover, the available resources provide limited guidance on how to apply the reported techniques in practice. However, other participants felt that there were no such barriers and that occupational therapists use creativity to find solutions. There are no barriers. We are OT. We are creative minds. We can make everything simple.
(Participant 5) Thus, participants highlighted the importance of being flexible and using their creativity as effectively as possible in their practice. In addition, some therapists believed that the application of EBP occasionally contradicts the client-centred approach. However, creativity was used to implement the evidence while addressing client and family concerns. Participant 1 elaborated on the client-centred practice by stating: Sometimes, we modify it [the intervention] according to plan to hold interest of the child we modify that activity.
EBP is essential to ensure high-quality occupational therapy services for patients in need. The study explored 10 occupational therapists’ perspectives on the factors related to the use of EBP in Kuwait to support practitioners’ implementation of such skills. Although the aim was to focus on the paediatric population in particular, the participants’ answers could be applicable to other age groups. Although the participants in the study by Baig et al were paediatric therapists, their findings on the factors affecting EBP use, namely accessibility of literature and lack of time to search for evidence despite motivation to find new interventions, are applicable to other populations. Another notable issue regarding the paediatric population was the participants’ elaboration on families’ expectations and satisfaction with therapy outcomes. Considering the family in the treatment plan emphasises the importance of family-centred practice when working with children. Furthermore, a systematic review highlighted family-centred practice as a high-quality and effective intervention for children when targeting functional outcomes.
In accordance with the present results, previous studies have demonstrated positive attitudes of occupational therapists towards EBP. However, the proportion of participants reporting EBP use in this study differed from that in most previous studies, which illustrated a lack of EBP utilisation. All participants in this study highlighted their EBP utilisation, which could be due to an imbalance of power weighted towards the researcher in the interviewer–participant relationship. This study measured the therapists’ perspectives qualitatively, whereas previous studies used objective measures of evidence-based activities. Accordingly, the participants in this study may have provided answers that do not represent their true perspectives; rather, they may have said what they thought the researcher wanted to hear. Thomas et al’s study found that positive attitudes towards EBP were translated into practice, which was attributed to the studied population of recent graduates of occupational and physical therapy. In addition, the study proposed that recent graduates are more likely to accept EBP than senior practitioners, as also proposed by Baig et al. Universities play an important role in therapists’ utilisation of EBP in terms of available facilities and education. This also accords with previous research in which occupational therapists felt inadequately prepared for decision-making, reflecting that they had not acquired the relevant knowledge and skills at university. Nevertheless, prior studies have noted therapists’ awareness of the need to improve their EBP research skills, which was also evident in this study. Although occupational therapy curricula address EBP as an accreditation standard of the World Federation of Occupational Therapists, a possible explanation could be the lack of practical training on its implementation, with the issue addressed only theoretically in education.
The issue of inadequate EBP education despite the presence of EBP-specific topics in undergraduate education is not limited to occupational therapy; physical therapy programmes have the same issue. A few researchers have proposed and evaluated the effectiveness of EBP and critical appraisal courses for students, with significant benefits noted after course completion. These promising data suggest the adoption of such teaching techniques in educational curricula. Generally, participation in EBP education has been found to enhance clinicians’ EBP knowledge and skills. The most obvious finding limiting EBP implementation, consistent with previous studies, is the lack of accessibility of research evidence and funding. When accessibility was supported by the therapists’ organisation, it was not an issue, as noted in the literature. Bennett et al interviewed 30 occupational therapists and found that increasing the use of EBP requires a workplace culture that encourages its use, which is highly influenced by the organisation. Unfortunately, the participants in this study lacked such support. The lack of organisational support found in this study could be because, in Kuwait, occupational therapy services are very limited in both the governmental and private sectors; the market is therefore not competitive, and organisations are not compelled to spend money on funding CPD and implementing EBP. Accordingly, the participants in this study had no option to improve other than self-funding their CPD during their annual leave. This initiative by the participants could be explained by their high sense of responsibility towards their profession and patients, as elaborated in the results. In contrast to the work of Thomas et al, our findings regarding the implementation of EBP were not limited to those working in the private sector; all participants showed a high passion for its utilisation.
Nevertheless, CPD helps therapists improve their self-confidence, which is important and associated with the use of EBP. The occupational therapy market in Kuwait is non-competitive, as highlighted earlier, because occupational therapy is still considered a nascent practice there and in nearby regions. Hence, studies in this region are lacking, which is a barrier highlighted by the participants. When evidence is available, its scope is not always applicable to this community and culture, as found by Alrowayeh et al. Therefore, when the participants in this study applied the evidence, they did not follow EBP alone. Participants in our study highlighted their use of creativity in their clinical practice, using their judgement to adapt the available evidence and apply it in a way that matches their community of practice. This flexibility can be attributed to creativity, which is encouraged in occupational therapy education curricula. Nevertheless, the participants also mentioned that their use of creativity helped them adopt a client-centred approach while implementing EBP. They believed that the sole application of EBP might contradict the client-centred approach. This limited understanding of the meaning of EBP might be due to their limited knowledge regarding the implications of EBP. In Melnyk et al’s EBP Implementation Scale, several items address client-centred EBP activities. Therefore, the best scientific evidence required to drive practice must be integrated and modified based on the clinicians’ expertise, the client/family situation and their related values. It is worth noting that on some occasions, the barrier to implementing EBP cannot be addressed by flexibility and creativity when the available evidence is unclear to therapists or insufficient information is provided. To overcome this lack of evidence, the participants appreciated their colleagues’ involvement.
Consulting colleagues was also highlighted by Rochette et al as the strategy preferred by occupational therapists in Canada with regard to their professional competencies in clinical practice. Moreover, occupational therapists in Matus et al’s study highlighted that the availability of a supportive colleague (ie, one who allowed them to feel comfortable asking for help, gave constructive feedback and was approachable and responsive) fostered their learning. It is worth making therapists aware that social support from peers is only possible through well-informed peers in the specific field. Alshehri et al reported that therapists who initially seek the opinions of colleagues before looking for literature are likely to have insufficient knowledge of how to access databases and research articles, which was a highlighted barrier in this study. Furthermore, our findings expanded the understanding of the time factor reported in the literature. Lack of time has been identified as one of the main barriers to EBP ; however, the qualitative approach of this study helped clarify that time management skills should be used, as therapists need more time dedicated to searching the literature and attending CPD training sessions. Implications of the study The study has several implications and recommendations for EBP education and practice: Universities need to consider topics related to EBP and provide practical training on its implementation. CPD courses on EBP utilisation and implementation need to be offered routinely to practitioners in Kuwait and the surrounding region to keep therapists updated with research-related knowledge and skills, including understanding papers and critically appraising evidence.
Organisational roles in delivering effective health services for patients should be clarified by advancing new knowledge regarding clinical education practices to help create a supportive work environment for CPD (eg, encouraging seminar discussions and accessible educational resources). Researchers can focus more on possible and effective strategies to enhance EBP in educational curricula and its utilisation. Limitations The results of this qualitative study represent paediatric occupational therapists’ perceptions of factors related to EBP implementation in a single country. However, readers may judge the transferability of the findings in light of their own cultural backgrounds and healthcare systems. Another limitation is that the findings were captured from therapists who could speak and understand English, and data were collected from the participants’ subjective points of view. Accordingly, adopting a mixed-methods design could enhance the trustworthiness of the data by quantitatively investigating the EBP activities in which therapists engage. Conclusion This study focused on the factors related to the implementation of EBP from the perspective of paediatric occupational therapists in Kuwait. Several motivational resources encourage therapists to use EBP in their clinical practice, including personal motivation and client outcomes, which are reinforced by a supportive social work environment. However, barriers regarding accessibility to resources and required funds were due to the lack of organisational support for EBP. Nonetheless, therapists must use creativity and flexibility in their practice to overcome these challenges. Moreover, the findings can be extended to improve educational curricula, promote the routine incorporation of EBP into everyday clinical practice and advance new knowledge regarding clinical education practices.
Robotic laparoendoscopic single-site ultrasound-guided renal artery balloon catheter occluded hybrid partial nephrectomy (LESS-HPN): a prospective pilot study | 613f8d87-a9f2-4446-b240-5f90d8a8b89f | 11818040 | Surgical Procedures, Operative[mh] | Robot assisted partial nephrectomy (RAPN) is recognized as an effective minimally invasive alternative to open surgery for the treatment of clinically localized renal tumors . Laparoendoscopic single-site surgery (LESS) seeks to enhance the minimal invasiveness of laparoscopic procedures . Robotic laparoendoscopic single-site partial nephrectomy (R-LESS PN) has demonstrated technical feasibility and safety, yielding superior cosmetic outcomes, reduced postoperative analgesic requirements, and faster recovery . However, the renal hilum dissection becomes more complicated due to the external collisions of the instruments and the restricted motion of the assistant due to the narrow space . Consequently, Kaouk et al. reported R-LESS PN with the omission of hilar clamping, primarily for selected exophytic tumors . Despite these advancements, R-LESS PN still requires refinements to address these challenges . Previously, we introduced a laparoscopic ultrasound (LUS) guided intervention technique to assist off-clamp partial nephrectomy, called Ultrasound-Guided Renal Artery Balloon Catheter Occluded Hybrid Partial Nephrectomy (UBo-HPN) . This technique involves temporarily occluding the renal arterial blood supply using a Fogarty balloon catheter. Preliminary results from our cohort study indicated that UBo-HPN is a safe approach that yields comparable surgical, oncological, and functional outcomes . In this study, we explored a novel surgical approach aimed at achieving intracorporeal minimally invasiveness: robotic laparoendoscopic single-site ultrasound-guided renal artery balloon catheter occluded hybrid partial nephrectomy (LESS-HPN) and reported its short-term outcomes in the first 10 cases. 
Patients A total of 10 patients with T1 stage renal tumors, enrolled between March and July 2023, were prospectively included in this study. The inclusion criteria were: (i) patients with a renal tumor of clinical stage T1N0M0, confirmed by imaging examination; (ii) patients aged 16–85 years. The exclusion criteria were: (i) patients with renal tumors near the renal hilum or invading the renal sinus area that might require simultaneous occlusion of the renal vein during tumor resection; (ii) patients whose tumor was supplied by multiple arteries and was difficult to occlude with one balloon; and (iii) patients with severe cardiovascular and cerebrovascular diseases, especially with large vascular lesions. Informed consent was obtained from all patients, and assessments were conducted by the same physician. A preoperative evaluation was jointly performed by one surgeon and one specialized sonographer, based primarily on preoperative ultrasound, contrast-enhanced CT, and CTA (only for cases requiring branch artery occlusion). This study was registered in the Chinese Clinical Trial Registry and approved by the institutional ethics review board. This work has been reported in line with the STROCSS criteria . LESS-HPN technique All surgeries were performed using the da Vinci XI Surgical System (Intuitive Surgical, Sunnyvale, CA, USA). After administering general anesthesia, the patient assumed a 70-degree lateral position. Following the open Hasson technique, a 4.5–5.0 cm lateral rectus incision was made on the affected side, and a Freeport (Ningbo SensCure Biotechnology) was inserted. The Freeport was equipped with one 8-mm optic trocar and two 8-mm working trocars, positioned at the 12 o’clock, 3 o’clock, and 9 o’clock directions, respectively. Additionally, two assistant trocars were placed within the Freeport (Fig. ).
Through the two assistant channels on the Freeport, the monitoring area of the LUS probe was able to cover a large area of abdominal organs and vessels (Fig. ). After locating the tumor with LUS, the perirenal fat was directly incised to expose the renal tumor border. Unlike conventional PN, the paracolic sulci, most of the perirenal fat, and the renal hilum structures did not need to be dissected. In fact, prophylactic dissection of the renal artery was performed in the first few cases of UBo-HPN, but this proved unnecessary. Therefore, the renal artery was not dissected in any of the subsequent LESS-HPN cases. Ultrasound-guided femoral artery puncture was performed, and a vascular sheath was inserted. Under full LUS guidance, a catheter and a guidewire were advanced through the femoral vascular sheath into the main, branch, or accessory renal artery. A Fogarty balloon catheter was then placed at the target artery along the guidewire and inflated with saline to occlude the blood supply. After confirming complete occlusion of the tumor’s arterial blood supply using color Doppler flow imaging (CDFI) or contrast-enhanced laparoscopic ultrasound (CE-LUS), the tumor was excised and the wound was sutured in the conventional manner. The balloon catheter was deflated and withdrawn, and the renal wound was inspected for bleeding. The specimen was extracted through the incision, and a drain tube was placed at its lowest point (Supplementary Fig. ). Hemostasis at the femoral artery puncture site was achieved using an arterial closure device. Technical success was defined as the simultaneous achievement of: (i) complete occlusion of arterial blood flow confirmed by CDFI or CE-LUS, with direct visualization during the subsequent tumor resection and suturing, and (ii) absence of conversion to conventional multiport robot-assisted or laparoscopic partial nephrectomy. All patients received follow-up every three to six months.
Study variables Baseline information was recorded for all patients. The complexity of tumors was determined with the R.E.N.A.L. score . Intraoperative variables including operation time, warm ischemia time (WIT), estimated blood loss (EBL), blood transfusion, and incision length were recorded. The number of additional accesses and any conversions were recorded. Intraoperative and postoperative complications were graded according to the Clavien-Dindo classification . Patients received follow-up every three to six months in the first two years and then annually. Statistical analysis Categorical variables were expressed as frequencies and percentages. Continuous variables that conformed to a normal distribution were expressed as mean ± standard deviation (SD); those that did not were expressed as median and interquartile range (IQR). Comparisons between the two groups (anterior vs. posterior location) were conducted using Student t-tests, and comparisons of dichotomous variables were conducted using Fisher’s exact test. A two-sided P value < 0.05 was considered statistically significant.
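The group comparisons described under Statistical analysis can be sketched in code. This is an illustrative example only: the per-patient values below are hypothetical, not the study's data. It implements a pooled-variance Student t statistic and a two-sided Fisher exact test for a 2×2 table using only the standard library:

```python
import math

def students_t(x, y):
    """Pooled-variance two-sample Student t statistic."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    ssx = sum((v - mx) ** 2 for v in x)
    ssy = sum((v - my) ** 2 for v in y)
    sp2 = (ssx + ssy) / (nx + ny - 2)  # pooled variance, df = nx + ny - 2
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p value for the 2x2 table [[a, b], [c, d]]."""
    r1, r2, c1, n = a + b, c + d, a + c, a + b + c + d
    def prob(k):  # hypergeometric probability of top-left cell = k
        return math.comb(r1, k) * math.comb(r2, c1 - k) / math.comb(n, c1)
    p_obs = prob(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    # sum the probabilities of all tables as extreme as the observed one
    return sum(prob(k) for k in range(lo, hi + 1) if prob(k) <= p_obs + 1e-12)

# Hypothetical example: operative time (min) by tumor location, n = 5 per group
anterior = [112, 108, 115, 101, 110]
posterior = [95, 92, 100, 88, 94]
t = students_t(anterior, posterior)
# with df = 8, |t| > 2.306 corresponds to a two-sided p < 0.05
print(round(t, 2), abs(t) > 2.306)

# Hypothetical dichotomous variable: renal artery occlusion by tumor location
print(round(fisher_exact_2x2(3, 2, 3, 2), 3))  # symmetric table -> p = 1.0
```

With five patients per group the t statistic has 8 degrees of freedom; the two-sided Fisher p value here uses the common "sum of tables no more probable than the observed" definition.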
Informed consent was obtained from all patients, and assessments were conducted by the same physician. A preoperative evaluation was jointly performed by one surgeon and one specialized sonographer, based primarily on preoperative ultrasound, contrast enhanced CT, and CTA (only for cases requiring branch artery occlusion). This study was registered in the Chinese Clinical Trial Registry and approved by the institutional ethics review board. This work has been reported in line with the STROCSS criteria . All surgeries were performed using the da Vinci XI Surgical System (Intuitive Surgical, Sunnyvale, CA, USA). After administering general anesthesia, the patient assumed a 70-degree lateral position. Following the open Hasson technique, a 4.5–5.0 cm lateral rectus incision was made on the affected side, and a Freeport (Ningbo SensCure Biotechnology) was inserted. The Freeport was equipped with one 8-mm optic trocar and two 8-mm working trocars, which were positioned at 12 o’clock, 3 o’clock, and 9 o’clock directions, respectively. Additionally, two assistant trocars were placed within the Freeport. (Fig. ). Through the two assistant channels on the Freeport, the monitoring area of the LUS probe was able to cover a large area of abdominal organsand vessels (Fig. ). After locating the tumor with LUS, the perirenal fat was directly incised to expose the renal tumor border. Unlike conventional PN, the paracolic sulci, most of the perirenal fat, and renal hilum structures did not need to be dissected. Actually, prophylactic dissection of the renal artery was performed in the first few cases of UBo-HPN, but this proved unnecessary. Therefore, the renal artery was not dissected in any of the subsequent LESS-HPN cases. Ultrasound-guided femoral artery puncture was performed, and a vascular sheath was inserted. Under full LUS guidance, a catheter and a guidewire were advanced through the femoral vascular sheath into the main, branch, or accessory renal artery. 
A Fogarty balloon catheter was then placed at the target artery along the guidewire, inflated with saline to occlude the blood supply. After confirming complete occlusion of the tumor’s arterial blood supply using color Doppler flow imaging (CDFI) or contrast-enhanced laparoscopic ultrasound (CE-LUS), the tumor was routinely excised and the wound was sutured in the conventional manner. The balloon catheter was deflated and withdrawn, and the renal wound was inspected for bleeding. The specimen was extracted through the incision. and a drain tube was placed at its lowest point (Supplementary Fig. ). Hemostasis at the femoral artery puncture site was achieved using an arterial closure device. Technical success was defined as the simultaneous achievement of: (i) complete occlusion of arterial blood flow confirmed by CDFI, or CE-LUS, with direct visualization during the following tumor resection and suturing, and (ii) absence of conversion to conventional multiport robot assisted or laparoscopic partial nephrectomy. All patients received follow-up every three to six months. Baseline information was recorded for all patients. The complexity of tumors was determined with the R.E.N.A.L. score . Intraoperative variables including operation time, warm ischemia time (WIT), estimated blood loss (EBL), blood transfusion, and incision length was recorded. The number of additional access and any kind of conversion were recorded.Intraoperative or postoperative complications were grade according to Clavien-Dindo classification . Patients received follow-up every three to six months in the first two years and then annually. Categorical variables were expressed as frequencies and percentages. Continuous variables that conformed to a normal distribution were expressed as mean ± standard deviation (SD). Those that did not fit the normal distribution were expressed as median and interquartile range (IQR). Comparisons between the two groups (anterior vs. 
posterior location) were conducted using Student's t-tests, and comparisons of dichotomous variables were conducted using Fisher's exact test. A two-sided P value < 0.05 was considered statistically significant. From March to July 2023, a total of 10 patients were included in the study, as outlined in Table . Among them, five patients (50%) had an anteriorly located tumor, while the remaining five patients (50%) had a posteriorly located tumor. Four patients (40%) had a history of one or more prior abdominal surgeries. The mean tumor diameter was 2.9 ± 1.1 cm, predominantly located on the left side in 80% of cases. Tumors were mostly stage T1a (80%), with 70% having a R.E.N.A.L. score of less than 7. No statistically significant differences were observed in the baseline data between the tumor location (anterior and posterior) groups. LESS-HPN was successfully completed in all 10 cases, as indicated in Table . The mean operative time was 103.3 ± 11.1 min, including 21.0 ± 2.7 min of WIT. The mean 48-hour postoperative serum creatinine increase (δserum creatinine) was 1.6 ± 12.0 µmol/L, accompanied by an estimated glomerular filtration rate (eGFR) decrease of 3.1 ± 8.1 ml/min/1.73 m² . Hemoglobin decreased by 1.8 ± 6.9 g/L postoperatively. The mean EBL was 42.0 ± 22.5 ml, with no intraoperative blood transfusion required in any case. The median incision length, measured upon completion of suturing, was 4.6 (IQR 4.5–4.7) cm. No case required additional access, and no conversion to renal artery clamping, radical nephrectomy, open surgery, or standard multiport robotic laparoscopy occurred. The operative time for posterior tumors was significantly shorter ( p = 0.041). Other surgical outcomes were found to be independent of tumor location (anterior/posterior). Among the 10 patients, renal artery occlusion was accomplished in six cases (60%) and branch artery occlusion in two (20%).
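The group comparisons described in the statistical analysis (Student's t-test for normally distributed continuous variables, Fisher's exact test for dichotomous variables, two-sided p < 0.05) can be sketched in pure Python. This is an illustrative implementation with made-up numbers, not the study data.

```python
import math

def student_t(a, b):
    """Two-sample Student's t statistic (equal variances assumed)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]."""
    r1, r2, c1 = a + b, c + d, a + c
    n = r1 + r2
    def hyper(k):  # hypergeometric probability of a table with cell (1,1) = k
        return math.comb(r1, k) * math.comb(r2, c1 - k) / math.comb(n, c1)
    p_obs = hyper(a)
    total = 0.0
    for k in range(max(0, c1 - r2), min(r1, c1) + 1):
        p = hyper(k)
        if p <= p_obs + 1e-12:  # sum all tables as or less likely than observed
            total += p
    return total

# Illustrative operative times (min) for two hypothetical location groups
anterior = [110, 108, 115, 101, 112]
posterior = [95, 99, 93, 102, 96]
t = student_t(anterior, posterior)
# df = 8, two-sided 5% critical value of the t distribution is about 2.306
print(f"t = {t:.2f}", "significant" if abs(t) > 2.306 else "not significant")
print(f"Fisher p = {fisher_exact_p(1, 4, 0, 5):.3f}")
```

With these illustrative groups, the t statistic (about 4.3 on 8 degrees of freedom) exceeds the two-sided 5% critical value of 2.306, while the sparse 2x2 table is far from significance.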
For patients with accessory renal arteries, one patient (10%) underwent accessory renal artery occlusion without main renal artery occlusion. Another had main renal artery occlusion without accessory artery occlusion (Supplementary Table ). The median follow-up time was 10.5 months. Postoperative pathology confirmed no positive surgical margins, and no recurrence was observed during follow-up. Four patients (40%) were diagnosed with clear cell renal cell carcinoma. Angiomyolipoma was diagnosed in 2 cases (20%). Papillary renal cell carcinoma, chromophobe renal cell carcinoma, oncocytoma, and multilocular cystic renal neoplasm were each diagnosed in 1 case (10%) (Supplementary Table ). Minimally invasive surgery, with its focus on reducing invasiveness, prioritizes better cosmetic outcomes and advantages in postoperative analgesia . R-LESS further reduces complication risks compared to conventional LESS . Although the da Vinci single-port (SP) system offers improved integration with LESS surgery, its availability in most medical centers is limited, largely due to cost considerations. UBo-HPN aims to maximize minimally invasive intracorporeal techniques. This technique eliminates the need for dissection of the renal hilum, the paracolic sulci, or kidney mobilization. The intra-abdominal approach involves a small peritoneal incision and fenestration of the renal tumor surface. Instead of traditional renal artery clamping, a Fogarty balloon catheter, guided by LUS, temporarily occludes arterial blood flow. Our unpublished clinical data of 70 cases has confirmed its safety and a high technical success rate. To further advance intracorporeal and cutaneous minimally invasive goals, we developed the LESS-HPN technique. Initial results from 10 cases indicated that LESS-HPN was a safe procedure, with no complications or recurrence. The average EBL for LESS-HPN was 42 ml, significantly lower than previously reported for multi-port RAPN or R-LESS PN .
The most promising feature of LESS-HPN was the synergy between R-LESS and UBo-HPN, which together resulted in favorable surgical outcomes. The mean operative time of 103 min was much shorter than the durations exceeding 300 min reported for both LESS-PN and multi-port RAPN . Additionally, the WIT for the anterior and posterior groups was notably reduced to 21.8 and 20.2 min, respectively, which was not only shorter than the reported LESS-PN duration of 26.5 min but also approached the WIT of multi-port PN at 20.2 min . Furthermore, our cohort exhibited a significantly lower postoperative decline in eGFR . Previous studies have reported high conversion rates for LESS, ranging from 7.9 to 14.2%, and the need for additional ports in 19–61.6% of LESS-PN procedures . Notably, 37% of cases were converted due to difficult dissection, and 25% due to bleeding . In contrast, our research suggested that LESS-HPN offered a viable solution to these challenges, as none of our cases required additional access or conversion to conventional surgery. Simplifying the intracorporeal procedure significantly reduced the complexity of dissection, particularly of the renal hilar structures, thus minimizing the risk of vascular damage and bleeding. In essence, LESS-HPN renders LESS procedures less technically demanding and more minimally invasive . Another notable advantage of LESS-HPN was its ability to achieve highly selective branch renal artery occlusion in cases where the tumor was supplied by specific branches, as confirmed by CTA. Despite the variability in renal vascular anatomy and tumor-specific blood supply, CTA imaging combined with LUS guidance allowed ultrasonographers to clearly identify the branches of the target artery supplying the tumor . This selective intravascular occlusion, as opposed to segmental renal artery clamping, mitigated the risk of dissecting higher-order tumor-feeding arteries and the associated vascular damage .
Following selective branch artery occlusion, the partial ischemia of the kidney can be promptly confirmed by CDFI or CE-LUS . Our preliminary findings demonstrated that LESS-HPN is particularly effective for non-complex renal tumors, making it a preferable option for patients with a history of prior abdominal surgery and significant intra-abdominal adhesions . In such cases, LESS-HPN minimized intracorporeal manipulation, offering more pronounced minimally invasive advantages. Limitation LESS-HPN shows greater effectiveness for certain patients, including those with lower BMI . However, the classic flaws of LESS, such as instrument collisions, still persist. Additionally, there is necessarily a learning curve for LESS-HPN. Lastly, this study was limited by its small sample size. Our first exploration of LESS-HPN indicated that it is a safe and feasible alternative for the treatment of renal tumors. LESS-HPN, a combination of R-LESS with UBo-HPN, achieves both cutaneous and intracorporeal minimal invasiveness. Controlled studies with larger sample sizes are expected in the future. Below is the link to the electronic supplementary material. Supplementary Material 1
Person-centred maternity care during childbirth: a systematic review in low and middle-income countries

In low- and middle-income countries (LMICs), institutional births have risen over the past two decades. However, evidence indicates that institutional births alone do not reduce maternal and newborn mortality without quality intrapartum care. Among the barriers to providing quality intrapartum care is non-compassionate and abusive care, which could reduce rapport between women and healthcare providers and influence women's future healthcare seeking. Evidence demonstrates that there is room for significant variation in assessing disrespectful and abusive care, and the findings are inconsistent. A consistent approach is therefore needed to examine the level of PCMC during childbirth without the methodological variations, including different operational definitions, methodologies for generating summary measures, and variations in eligibility criteria, that affect measures of disrespectful and abusive care. To the best of our knowledge, no systematic review has been conducted to comprehensively assess the PCMC level in LMICs. Understanding the level of PCMC and its determinants could provide evidence to address this gap. We systematically reviewed all published articles addressing the level of PCMC during childbirth and found 11 studies and 12 articles comprising 8341 women in six LMICs. PCMC levels during childbirth were shown to be higher in women who were wealthier and more educated, had better ANC experiences, and had received both ANC and birthing care from the same healthcare providers (midwife-led care). The findings demonstrate that low levels of PCMC occurred in the subdomain of communication and autonomy. Contextualised multifaceted interventions are needed to enhance the communication and autonomy subdomain during birth to improve PCMC.
Person-Centred Maternity Care (PCMC) is essential for improving the quality of intrapartum and immediate postnatal care and could lead to increased institutional birth and lower maternal and neonatal mortality in low- and middle-income countries (LMICs) . Despite international efforts to minimise maternal and newborn mortality in LMICs , many of these countries continue to have high mortality rates, with three-quarters of maternal deaths occurring in the postpartum period . While increasing facility births has been considered a potential strategy for lowering maternal and neonatal mortality rates, it is also essential to focus on enhancing the quality of intrapartum care. This notion is supported by studies from India and the Dominican Republic, which revealed that maternal and newborn mortality rates did not decrease significantly despite the increased rates of facility births . PCMC refers to compassionate care responsive to each woman's and her family's preferences, needs, and values during childbirth . PCMC includes responsive and respectful maternity care, treating patients with respect and dignity, good communication with patients in all health-related decisions, and maintaining continuity of care . PCMC has subdomains such as dignity and respect, autonomy, privacy and confidentiality, communication, social support, supportive care, predictability and transparency of payments, trust, stigma and discrimination, and a healthy facility environment . PCMC can contribute to the timely provision of care, better provider–patient communication, increased treatment adherence, and better maternal and neonatal birth outcomes . In LMICs, PCMC could be influenced by various factors, including the type of health facility, women's wealth and educational status, mode of birth, obstetric complications, and burnout among healthcare providers due to issues such as non-payment of salaries, workforce shortages, and equipment/supply deficits during childbirth.
These factors contribute substantially to the occurrence of disrespectful, abusive, abandoning, and neglectful care, which, in turn, may lead to poor PCMC . Additionally, the increasing medicalisation of the childbirth process without involving women in the process tends to compromise a woman's capacity to give birth and negatively impact her childbirth experience and subsequent health-seeking behaviour, contributing to poor PCMC . Thus, poor quality of PCMC, directly and indirectly, contributes to high maternal and neonatal deaths . Evidence demonstrates that inconsistencies arise in findings on disrespectful and abusive care during birth, resulting from differences in operational definitions, methodologies for generating summary measures, and eligibility criteria. This is exemplified by significant variations in reported findings across different countries: the lowest level of non-dignified care, such as women being shouted at, insulted, or threatened during and after labour, was reported in Malawi at 1.91% ; overall, the highest levels of disrespectful and abusive care were reported in Nigeria at 98.0% and in Ethiopia at 98.9% . Unlike the wide disparities observed in disrespectful and abusive care, findings on PCMC obtained using quantitative methods show greater consistency and comparability across different settings . This systematic review aims to address this knowledge gap by investigating the level of PCMC in LMICs using studies that used the Afulani PCMC scale . This systematic review followed the 2020 Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) guidelines . The protocol was registered in the Prospero International Register of Systematic Reviews with ID: CRD42023426638. Information sources and search strategy We conducted a systematic review using peer-reviewed studies. Searches were performed on PubMed/Medline, Embase (Ovid), CINAHL, and Maternal and Infant Care (Ovid) up to 30 May 2023 and updated 26 April 2024.
We used medical subject headings (MeSH) and key terms identified using a MeSH analyser, including: “Adolescen*” OR “adult” OR “middle-aged” OR “young adult” OR “pregnant women” OR “mothers*” AND “Deliver*” OR “childbirth” OR “obstet*” OR “prenatal care” OR “parturition” OR “pregnan*” OR “maternal health services” OR “maternal health care” OR “individual-centred care” OR “patient participation” OR “patient-provider communications” AND “developing countries” OR “low and middle-income countries”. Additionally, we performed hand searching (snowball technique) using the references of included studies and relevant reviews to identify any further articles not captured in this search. Table presents the terms/keywords used for this study. The 2020 Preferred Reporting Items for Systematic Review and Meta-Analysis (PRISMA) checklist was used to present the findings on PCMC in LMICs (Fig. ). Eligibility criteria The eligibility criteria for this review were: (i) cross-sectional, mixed-methods, and prospective cohort studies that measured PCMC using the Afulani PCMC Scale (to be consistent in identifying the level of PCMC across countries); (ii) studies conducted in LMICs, as per the World Bank 2023 classification ; and (iii) studies published in English, with no time restriction, whose study population was women. We excluded studies modelling PCMC, personal commentaries, reviews, notes, case studies, conference abstracts and proceedings, press releases, news articles, studies with incomplete information, studies with methodological problems such as small sample size or unavailable full text, and reflections on and analyses of existing literature. If more than one outcome was reported from a single study, each outcome was analysed independently. Study selection Searched articles were exported to EndNote 20, and any duplicates were removed within EndNote. Further screening was conducted by ZYK and AB using title, abstract, and full text.
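The three Boolean blocks of the search strategy above (population AND topic AND setting) can be combined programmatically. This is an illustrative sketch using the listed terms; the final syntax would be adapted to each database's interface.

```python
# Term lists taken from the search strategy described above
population = ["Adolescen*", "adult", "middle-aged", "young adult",
              "pregnant women", "mothers*"]
topic = ["Deliver*", "childbirth", "obstet*", "prenatal care", "parturition",
         "pregnan*", "maternal health services", "maternal health care",
         "individual-centred care", "patient participation",
         "patient-provider communications"]
setting = ["developing countries", "low and middle-income countries"]

def or_block(terms):
    """Join a list of terms into one parenthesised OR block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# Combine the blocks with AND, mirroring the strategy's structure
query = " AND ".join(or_block(block) for block in (population, topic, setting))
print(query)
```

Building the query from term lists makes it easy to re-run the search when the strategy is updated, and to document the exact string used for each database.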
For each eligible study, the author's name, year of publication, study aim, country, study design, PCMC score, and predictors of PCMC were extracted. Disagreements between the two authors were settled by discussion or by involving additional authors when necessary. Data analysis A narrative synthesis was applied to present the findings. The full PCMC scale has 30 items , with a shorter 13-item version . The 30-item PCMC scale has three sub-scales for dignity and respect (6 items), communication and autonomy (9 items), and supportive care (15 items). Each question on the scale has four response options ranging from 0 to 3, thus generating a score ranging from 0 to 90 for the full 30-item scale, 0 to 39 for the 13-item short version, 0 to 18 for dignity and respect, 0 to 27 for communication and autonomy, and 0 to 45 for supportive care . Quality appraisal Quality appraisal was conducted using the JBI checklist, which contains nine items: whether the sample frame was appropriate to address the target population; whether study participants were sampled appropriately; the adequacy of the sample size; whether the study subjects and settings were described in detail; whether the data analysis was conducted with sufficient coverage of the identified sample; whether valid methods were used for identifying the condition; whether the condition was measured in a standard, reliable way for all participants; whether the statistical analysis was appropriate; and whether the response rate was adequate and, if not, whether the low response rate was managed appropriately. Based on the JBI prevalence checklist, the authors assessed the quality of eligible studies, and any disagreements were resolved through discussion and consensus. This systematic review included studies with quality scores greater than 5 out of a total score of 9 (Table ).
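The PCMC scoring described under Data analysis can be sketched as follows. The item-to-subscale assignment shown here (first 6 items to dignity and respect, next 9 to communication and autonomy, last 15 to supportive care) is illustrative only; the published Afulani scale defines the exact item groupings.

```python
def score_pcmc(responses):
    """Score the 30-item PCMC scale: responses is a list of 30 integers, each 0-3.

    Returns subscale sums (dignity/respect 0-18, communication/autonomy 0-27,
    supportive care 0-45) and the total score (0-90).
    """
    if len(responses) != 30 or any(r not in (0, 1, 2, 3) for r in responses):
        raise ValueError("Expected 30 item responses coded 0-3")
    # Illustrative item groupings; the published scale fixes which item
    # belongs to which subscale.
    dignity = sum(responses[0:6])         # 0-18
    communication = sum(responses[6:15])  # 0-27
    supportive = sum(responses[15:30])    # 0-45
    return {"dignity_respect": dignity,
            "communication_autonomy": communication,
            "supportive_care": supportive,
            "total": dignity + communication + supportive}  # 0-90

# Example: a respondent answering 2 on every item
print(score_pcmc([2] * 30))
```

Summing coded responses in this way is what allows the mean scores and subscale ranges reported in the Results to be compared across studies that used the same scale.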
A total of 888 studies were found in four databases and additional sources, of which 167 studies were removed due to duplications and 684 were excluded based on the title and abstract.
A total of 37 studies were considered for full-text screening, and 25 were excluded for reasons such as failure to report PCMC, discrepancies between findings reported in the abstract and the main body, a different study population (healthcare providers), and PCMC scale validation. Two articles came from a single study and reported outcomes independently. Finally, 12 articles from 11 studies were included in this systematic review (Fig. ). Characteristics of included studies Six studies were conducted in Ethiopia , two studies and three articles in Kenya , one in Nigeria , one in Pakistan , and one across three countries (Kenya, Ghana and India) . Level of person-centred maternity care (PCMC) We report the total PCMC score, as well as scores for the sub-scales of dignity and respect, communication and autonomy, and supportive care. Eleven studies reported the mean PCMC score during childbirth and factors associated with PCMC during childbirth. The lowest overall mean PCMC score out of 39 was 19.07 in Ethiopia , and the highest was 24.2 (SD = 8.4) in Kenya during childbirth at health facilities. Additionally, the lowest overall mean PCMC score out of 90 was 46.5 (SD = 6.9) in rural Ghana , and the highest was 60.2 (SD = 12.3) in urban Kenya during childbirth at health facilities (Table ). In nine studies, dignity and respect, communication and autonomy, and supportive care were reported (Table ). Types of PCMC domains This review assessed the PCMC subscales of dignity and respect, communication and autonomy, and supportive care. Detailed findings are presented in Table and the following sections. Dignity and respect during childbirth Care with respect and dignity is essential during childbirth. The challenge is maintaining the woman's dignity while providing evidence-based maternity care that supports normalcy, wholeness, and safety for the woman and her newborn .
In nine studies, dignity and respect were reported in six LMICs (Table ). In this review, the minimum score for dignity and respect was 9.95 (SD = 3.20) out of 18 , while the maximum was 15.7 (SD = 2.15) (Table ). Communication and autonomy during childbirth Essential aspects of PCMC during birthing are effective communication and respect for women's autonomy . These are essential for healthcare providers to interact with women, to ask for their permission before performing any examinations or procedures, to explain the benefits of procedures and medications, and to foster an environment where women feel comfortable asking questions about their care during childbirth . In this review, the lowest mean communication and autonomy score was 8.3 (SD = 3.3) out of 27, reported in Ghana , and the highest means were 15.87 (SD = 5.44) out of 24 in Nigeria and 15.5 (SD = 2.39) out of 27 in Ethiopia . Supportive care during childbirth Supportive care is vital during childbirth, providing emotional support, comfort measures, and information, and advocating for quality of care . It helps facilitate a positive childbirth experience, reducing fear, anxiety, and resultant adverse effects . In this review, the lowest mean supportive care score was 24.6 (SD = 4.0) out of 45, reported in Ghana , and the highest was 32.2 (SD = 6.0) in India . Factors associated with PCMC The results indicated that several factors were associated with an increased mean PCMC during childbirth at health facilities. These factors included women belonging to a high economic status, women who were literate , and married women who delivered their babies at health centres or private health facilities . Conversely, certain factors were found to be associated with a lower mean PCMC during childbirth.
These factors included experiencing domestic violence , having no ANC follow-up or fewer than four ANC contacts , delivering babies at higher-level health facilities , receiving assistance during childbirth from unskilled attendants or auxiliary nurse-midwives , undergoing childbirth during night-time , encountering neonatal death , staying in health facilities for 2–7 days , and facing complications during childbirth (Table ).
This systematic review synthesised the level of PCMC based on studies using the PCMC scale during childbirth in six LMICs .
The lowest mean PCMC score was 46.5 in rural Ghana , and the highest was 60.2 in Kenya , out of 90. In this review, the lowest subscale domain of PCMC was identified as communication and autonomy in Ghana . A low level of communication and autonomy could affect the rapport between healthcare providers and women regarding decision-making and the performance of any procedures, for example physical examinations, which can decrease the quality of care and be detrimental to women's future health-seeking. Healthcare providers could receive training on communication and women's autonomy, and burnout during childbirth could be reduced, to improve effective communication . Additionally, healthcare providers could allow women to make all care-related decisions during childbirth. The lowest dignity and respect subscale score of PCMC was in Nigeria . A low level of dignity and respect for women during childbirth could lead women and families to develop a negative attitude toward maternity care and lessen the quality of care. Similarly, healthcare providers should receive training and supervision on women's rights, dignity, and respect during intrapartum care to improve the quality of care and promote healthy birth experiences . Furthermore, the lowest supportive care score was reported in Ghana . Improving low levels of supportive care during childbirth could be achieved by allowing companionship during childbirth, providing emotional support, making the labour ward a conducive environment , and lessening healthcare providers' burnout. Supportive care is crucial for promoting a positive childbirth experience, lowering women's and families' fear and anxiety, and minimising any adverse effects that may arise during childbirth . Women who were wealthier, employed, literate, married, and who gave birth in a health centre or private facility were more likely to receive better PCMC.
This could be because women with higher socioeconomic positions find it easier to build rapport with healthcare providers and to obtain their preferred PCMC. Women who received ANC and childbirth care from the same healthcare providers were also more likely to receive better PCMC . This could be because women tend to build cohesiveness with healthcare providers with whom they have more contact and are therefore more likely to establish rapport. This finding aligns with a study demonstrating that women who received midwife-led continuity-of-care models were less likely to require interventions and more likely to be satisfied with their care during childbirth .

Women who experienced domestic violence, gave birth in higher-level health facilities , were assisted by an auxiliary nurse or midwife or by an unskilled attendant , booked ANC in the second or third trimester , had no ANC follow-up or fewer than four contacts , stayed 2–7 days in the facility , or experienced complications during childbirth or neonatal death were less likely to receive better PCMC. These findings align with a study indicating that low socioeconomic status significantly increased domestic violence , which could decrease PCMC during childbirth. Moreover, a study reveals that women who experienced midwife-led continuity models of care had lower intervention rates and more satisfaction with their care, which could increase PCMC during childbirth , while care from an auxiliary nurse, midwife, or unskilled attendant significantly decreased PCMC during childbirth. Women who experienced complications and neonatal deaths, which can cause psychological trauma, may be unable to communicate effectively with healthcare providers, resulting in lower PCMC.

Therefore, the overall findings show a low level of PCMC between women and healthcare providers and suggest strategies to improve PCMC during childbirth in healthcare facilities in LMICs. This study has limitations.
First, our review is based on findings from only six countries, which limits the generalizability of the results to all LMICs. Second, the use of self-reported data on women’s experiences during childbirth introduces potential biases, such as social desirability and recall bias. Women may not accurately recall all their experiences or may hesitate to report negative experiences with care. In exit interviews conducted in health facilities, women might avoid disclosing negative experiences due to concerns about how their future care could be affected. Additionally, research has shown that women are less likely to report negative experiences immediately after childbirth, as the joy of having a baby can influence their responses . Despite these limitations, this review provides valuable insights into the level of PCMC during childbirth in health facilities across LMICs. To the best of our knowledge, it is also the first comprehensive analysis of available data on this topic in LMICs. Our findings indicated that the communication and autonomy components of PCMC are notably low, affecting the rapport between healthcare providers and women, as well as decision-making and the execution of procedures. Targeted interventions are especially necessary for women who did not receive antenatal care, had extended stays at health facilities, or experienced complications during childbirth. Improving PCMC requires continuity of care through antenatal and intrapartum care provided by the same healthcare providers, alongside fostering a supportive environment for both women and healthcare providers during childbirth. Furthermore, providing training for healthcare providers on effective communication, promoting women’s autonomy, and respecting their dignity is crucial for ensuring the consistent delivery of high-quality PCMC. Below is the link to the electronic supplementary material. Supplementary Material 1 |
Relativity, Rank, and the US News Health’s Cardiology, Heart, and Vascular Surgery Best Hospitals

The basic idea of evaluating hospital performance has long garnered immense interest. For a variety of pecuniary and nonpecuniary reasons, the general objective is to compare hospitals along some aggregated dimensions. After all, health care is a rare purchase, and accordingly, patients often seek prestigious hospitals. Consequently, rankings have become an important vein for annual patient visits, reimbursement rates, and funding allocation. Not surprisingly, heart surgery and cardiology rankings attract great interest from patients, clinicians, and hospital administrators alike. There are, however, decades-old challenges to the utility of the various metrics used in rankings, so it is worth exploring the clinical indicators in these hospital ranking systems. Quality measures should ideally have a clear and causal explanation. Generally, one of the most challenging aspects of quality measurement is obtaining quantitative data. The purpose of the present article was to examine critically the most popularized existing hospital ranking system, that of the US News and World Report (USNWR or US News), to assess its validity and to derive insights about the top cardiology and heart surgery programs.

Utilizing the USNWR 2022–2023 Cardiology, Heart, and Vascular Surgery Best Hospital rankings, 16 variables were analyzed across the top 50 hospitals. In this analysis, the authors divided the overall group (n = 50) into quintiles, each comprising 10 hospitals.
The 16 critical categories included in this evaluation encompassed diverse aspects: US News specialty score, patient experience, public transparency, 30-day survival, discharging patients, intensivists, Society of Thoracic Surgery (STS) transparency, American College of Cardiology (ACC) transparency, advanced technologies, patient services, trauma center, recognition as a magnet hospital, current American Hospital Association (AHA) responder, number of patients, nurse staffing, and expert opinion. Guided by the USNWR, the methodology underlying these rankings integrated 3 fundamental components: structure, process, and outcomes. This approach was reflected in the percentages assigned to the overall ranking, distributing importance as follows: outcomes (37.5%), structure (30%), process/expert opinion (24.5%), patient experience (5%), and public transparency (3%). An analysis of variance (ANOVA) was used to compare continuous variable scores, and a χ² test was used to compare categorical variable scores among the 5 quintile groups. A significance threshold of P < 0.05 was applied to each of the 16 categories and quintiles to ensure a reliable evaluation of this report. In the second phase of the analysis, the 50 hospitals, subdivided into quintiles, were further evaluated using data sourced from the Centers for Medicare and Medicaid Services (CMS), specifically focusing on heart attack, heart failure, and stroke death rates. These metrics were quantified as percentages and categorized into the same 5 groups previously established in the USNWR rankings. Mean, median, and range values were calculated for each variable. Additionally, an overall average death rate encompassing all 3 variables was derived for each individual hospital.
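The quintile comparison described above (ANOVA for continuous scores, a χ² test for categorical ones) can be sketched as follows. The data here are simulated stand-ins, since the underlying hospital-level values are not reproduced in this article:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Illustrative stand-ins: a continuous metric (e.g. specialty score) for
# 5 quintiles of 10 hospitals each, with a downward trend across quintiles.
quintile_scores = [rng.normal(loc=80 - 3 * q, scale=5, size=10) for q in range(5)]

# ANOVA compares the continuous metric across the five quintiles.
f_stat, p_anova = stats.f_oneway(*quintile_scores)

# Chi-square compares a categorical metric: rows = quintiles, cols = yes/no counts
# (e.g. STS transparency; counts here are invented for illustration).
transparency_counts = np.array([[9, 1], [8, 2], [9, 1], [7, 3], [8, 2]])
chi2, p_chi2, dof, _expected = stats.chi2_contingency(transparency_counts)

print(f"ANOVA p = {p_anova:.3f}, chi-square p = {p_chi2:.3f} (alpha = 0.05)")
```

Any category whose test yields p below the 0.05 threshold would be flagged as differing across quintiles, mirroring the screening applied to the 16 categories.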
Due to data availability constraints, 2 hospitals were excluded from the heart attack death rate category, whereas only one hospital was excluded from the heart failure and stroke death rate categories. Following the compilation of average death rates for all hospitals, the groupings were restructured based on percentiles, from lowest to highest, facilitating a direct comparison with the original USNWR rankings. A total of 50 hospitals stratified into 16 distinct categories were compared via a comprehensive ANOVA/χ² statistical analysis along with a Wilcoxon/Kruskal–Wallis test. Statistical significance was established at a P value of < 0.05, revealing notable distinctions in 4 of the 16 categories examined. This project was exempted from IRB approval by Temple University School of Medicine. Significant variations were seen in advanced technologies ( P = 0.05), US News specialty score ( P < 0.001), number of patient referrals ( P = 0.004), and expert opinion ( P < 0.001). Conversely, 7 categories exhibited no statistically significant differences among the quintiles: patient experience ( P = 0.65), public transparency ( P = 0.54), STS transparency ( P = 0.54), ACC transparency ( P = 0.40), trauma center ( P = 0.20), recognition as a magnet hospital ( P = 0.43), and nurse staffing ( P = 0.53). Furthermore, 4 categories displayed uniform responses across all 50 hospitals and individual quintiles: current AHA responder, intensivists, discharging patients, and 30-day survival. Notably, the most significant variance observed pertained to the total number of referred patients between hospitals. The first quintile accounted for 13,371 referred patients, whereas the last quintile accounted for only 6,690. This represents a substantial divergence in overall patient referrals across the hospital spectrum, even within these 5 groups.
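The average-death-rate restructuring described above reduces to taking the mean of the available CMS rates for each hospital (skipping any metric that was unavailable) and sorting from lowest to highest. A sketch with hypothetical hospitals and rates; none of these figures come from the article:

```python
# Hypothetical CMS-style death rates (%): (heart attack, heart failure, stroke).
# None marks a rate that was unavailable and is excluded from that hospital's mean.
hospitals = {
    "Hospital A": (12.1, 10.8, 13.0),
    "Hospital B": (13.4, None, 14.2),
    "Hospital C": (11.0, 9.9, 12.5),
}

def average_death_rate(rates):
    """Mean over the rates that are actually available for a hospital."""
    known = [r for r in rates if r is not None]
    return sum(known) / len(known)

# Re-rank from lowest to highest average death rate, mirroring the
# percentile-based restructuring compared against the USNWR order.
reranked = sorted(hospitals, key=lambda h: average_death_rate(hospitals[h]))
print(reranked)
```

Comparing this outcome-driven order against the original USNWR positions is exactly the exercise that produced the restructured top-10 list reported below.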
This critical analysis of the significant findings distinguishing the top 50 hospitals revolved around key factors, focusing specifically on expert opinion and patient volume. Expert opinion, a pivotal criterion, targets an institution’s ability to deliver exceptional care to challenging cases. This aspect constitutes the process component, which holds substantial weight in the overall institutional ranking at 24.5%. Expert opinion, as explained by the USNWR, is determined by nominations from board-certified specialists. The rankings for the year 2022–2023 were a collective analysis of the preceding 3 years of physician surveys, and equal weight is assigned to each year in determining the final score for expert opinion. Regarding the structural element, patient volume carries a weight of 6.67% within the overall ranking and falls under the structure component, which constitutes 30% of the entire hospital evaluation. Structure specifically assesses hospital resources that directly impact overall patient care and the hospital environment. Lastly, patient volume is derived from medical and surgical discharges in the cardiology and heart surgery grouping, based on specific Medicare Severity Diagnosis Related Group submissions for CMS reimbursement. The secondary analysis of CMS data shown in Table 2 employed both ANOVA and Kruskal–Wallis testing, with a significance level set at P < 0.05. Heart attack, heart failure, stroke, and average death rates were subjected to statistical examination, categorized both by quintiles and overall grouping (total n = 50, with each quintile n = 10). For heart attack death rate, the ANOVA yielded a highly significant P value of < 0.001, indicating a notable difference among quintiles. However, heart failure ( P = 0.086) and stroke death rates ( P = 0.22) did not exhibit significant differences among quintiles based on ANOVA.
Conversely, when considering the average death rate across all 3 categories, a significant P value of 0.0003 was obtained, indicating discernible distinctions among quintiles of the top 50 hospitals, particularly concerning death rates from heart attack, heart failure, and stroke. The highest death rate was found in the 4th group, namely, hospitals ranked 31–40 with USNWR, compared to the other groups . Subsequently, by utilizing average percentages, the top 50 hospitals were reorganized by the variable that encompassed all 3 death rates (heart failure, heart attack, and stroke) to ascertain any disparities compared with the top quintile (1–10) of hospitals as per USNWR rankings. This is shown in . The analysis revealed a restructured ranking order of (5, 3, 8, 2, 4, 25, 29, 13, 24, and 29), suggesting that post-adjustment, only 5 out of the top 10 hospitals remained within the first quintile. In the current climate of performance and competition, hospitals are ranked by external metrics to measure quality. It is widely acknowledged that the USNWR hospital ranking is an important ingredient in choosing a hospital. Health care providers and patients have embraced the USNWR hospital rankings as the unanimous best per se. Traditionally, hospital ranking is conceptualized as having a causal impact on patient referrals. Each year, USNWR rates and publishes a list of the top 50 US hospitals and different specialties. There are 3 dimensions that capture the quality of care: structure (staff, equipment, and environment), process (interactions between patients and the health care system for diagnosis, treatment, and experience), and outcomes. Each dimension is complementary. On theoretical grounds, the 3 variables can be traced back to Donabedian’s model. Of the top 50 hospitals, the top 20 at the pinnacle are recognized as honor-roll hospitals. Beginning with graduate education, dissenting views have shaken the tenet of the USNWR rankings. 
The authors know surprisingly little about the drivers of top hospital rankings. This does not mean, however, that rankings are not based on rigor. Growing discussion has called into question the top 50 USNWR cardiology, heart, and vascular surgery hospitals, most notably illuminating the distinction between honor-roll and non–honor-roll hospitals. From this perspective, this paper explores the rankings of the USNWR top 50 cardiology and cardiac and vascular surgery hospitals. Accordingly, the authors compared the variables used by USNWR to rank the top 50 hospitals. The impact of USNWR rankings on undergraduate and graduate schools has been the center of recent controversy. Yale Law School withdrew from the USNWR ranking, despite being ranked first for many years. Afterward, Harvard, Stanford, Georgetown, Columbia, and Berkeley followed. A parallel and emerging trend occurred in January 2023, when 9 of the nation’s top-ranked medical schools announced that they would no longer participate in the USNWR ranking system. The schools cited philosophical differences with the system’s emphasis on factors such as standardized test scores. Moreover, George Q Daley, Dean of Harvard Medical School, wrote on January 17, 2023: “My concerns and the perspectives I have heard from others are more philosophical than methodological, and rest on the principled belief that rankings cannot meaningfully reflect the high aspirations for educational excellence, graduate preparedness, and compassionate and equitable patient care that we strive to foster in our medical education programs.” Similarly, Columbia University Vagelos College of Physicians and Surgeons’ Dean Katrina Armstrong made the case that rankings “perpetuate a narrow and elitist perspective on medical education.” Another sector where this debate is extremely relevant is hospital rankings.
Empirical and theoretical researchers alike have argued that top hospitals position themselves for increased publicity and marketing. The negative effect stems from the perception that rankings are used to promote name recognition, patient referrals and volume, third-party payments, and financial rewards. This is based on the finding that ranked hospitals cite the USNWR ratings in most (61%) of their direct-to-patient advertising. Of the top 20 USNWR hospitals, 100% advertise their ranking on their website, and 70% do so on the primary landing page. In fact, the top 50 hospitals are displayed on the USNWR’s website and advertised to the public. In 2006, Williams et al examined 744 US hospitals for cardiology and cardiac surgery. The authors found that only 23 of the top 50 hospitals demonstrated better-than-average performance, and 9 of the top 50 demonstrated significantly worse performance. Wang et al compared 30-day mortality and readmission rates for patients 65 years or older who were hospitalized for acute myocardial infarction, heart failure, and coronary artery bypass grafting. Top-ranked hospitals had lower 30-day mortality but similar or higher readmission rates. Another contribution, from Mehta in 2019, showed the relationship between USNWR hospital rankings and actual outcomes for major cancers. Although the authors of that paper found lower mortality at the top 50 USNWR hospitals vs non–top-ranked hospitals, hospitals within the top 50 USNWR rankings had comparable outcomes. None of the 3 studies provided clear-cut results. Debates continue, as some observers have concerns with USNWR hospital rankings. Identifying USNWR methodology issues is important from several perspectives, including but not limited to the following. First, the out-of-date data used in the rankings may not represent the entire patient population.
In 2022, the “procedure and conditions” metrics accounted for a large part of the rank, but some of the procedural data was 7 years old at that time. Moreover, the disproportionate number of Medicare patients used in rankings does not reflect the whole population. Second, “expert opinion” relies on random clinicians to rank hospitals in which they presumably have direct knowledge of the care delivered. Nearly 85% of the hospital process dimension is expert opinion, and nearly 15% is survey accounts. Mortality permeates USNWR hospital rankings and is 35% of a hospital’s score. The mortality model encompasses Elixhauser comorbidities and demographics and adjusts for differences in case mix between hospitals. However, Shahian et al have shown that, due to flaws in methodology and variability in assessing comorbidities accurately, hospital mortality rates do not predict the quality of care delivered. It is interesting, furthermore, that deaths can be attributed to a specialty despite patients not being cared for by that specialty. Although it has been long recognized that USNWR is the most commonly referenced ranking system, the question remains: What factors matter for USNWR? The results show that each hospital’s expert opinion score is crucial. In light of the critical role of expert opinion, a central question is: Is there a better ranking system than USNWR? Further research should aim to distinguish clinical from financial factors, for instance, by incorporating clinical outcomes and fiscally efficient health care delivery. The withdrawal of top law and medical schools from the USNWR rankings has ignited a great deal of interest in investigating hospital rankings. Comparing the top 50 hospitals for cardiology, heart, and vascular surgery offers a useful vantage point on USNWR methodology. The results largely confirm that the expert opinion variable plays a critical role in the reputation of the top 10 hospitals.
Concerning patient referrals, the authors underline that higher-tier hospitals receive more referrals than lower-tier hospitals. Overall, the empirical evidence does not support significant differences among the top 50 hospitals regarding patient management, nurse staffing, patient experience, and overall satisfaction. A range of possible explanations exists, and these warrant further inquiry. Although many have argued about the merits of USNWR hospital rankings, taken together, rankings fill a strong customer demand and are sticky.
Pediatric Resident Insulin Management Education (PRIME): A Single-Session Workshop Emphasizing Active Learning

By the end of this activity, learners will be able to:
1. Differentiate between types of diabetes.
2. Describe the difference between basal and bolus insulin.
3. Recognize patient characteristics that alter insulin need.
4. Create a safe subcutaneous insulin plan.
5. Describe the symptoms of hypoglycemia and how to treat hypoglycemia.

Insulin is a high-risk medication with potential for significant patient morbidity and mortality. This is especially true in children, and insulin ranks as the third most common cause of medication errors in hospitalized children within the United States. While the American Diabetes Association recommends that children with diabetes receive care from pediatric endocrinologists, the relative shortage of pediatric endocrinologists has led to an increasing need for generalists to be prepared to care for children with insulin-dependent diabetes. The American Board of Pediatrics therefore recommends that all board-certified pediatricians be able to develop an insulin management plan for patients with diabetes. Despite this, resident confidence and perceived competence in insulin prescribing are lacking. A practical and easily integrated insulin curriculum is needed to increase pediatric resident confidence and competence in prescribing this critical medication. The few existing studies exploring trainee knowledge and comfort with insulin management have demonstrated gaps in trainee insulin management knowledge and perceived competence. A study of more than 2,000 recently graduated medical trainees within the United Kingdom showed that trainees overall lacked confidence in insulin management. In the United States, a study of internal medicine, family practice, and surgery trainees at one institution showed a significant gap in diabetes knowledge.
Within our own institution, a focused needs assessment performed 4 months prior to implementing this workshop showed that only 32% of learners either somewhat agreed or strongly agreed they would be able to create a new subcutaneous insulin plan. Additionally, only 44% and 16% somewhat agreed or strongly agreed they would be able to calculate an insulin dose and adjust an insulin regimen, respectively. Although several curricula have been designed to target insulin management in pediatric patients not in diabetic ketoacidosis, most rely on resident self-directed learning to either receive the training or effectively engage during education sessions, are geared towards residents who choose to enroll in an endocrine elective, or require multiple teaching sessions to administer. It is encouraging that one study showed that pediatric resident insulin teaching can lead to improved inpatient glucose management; however, this success occurred with an 8-week intervention, which may not be practical during residency training. In addition, reliance on self-directed learning without protected time may be less effective for learners given the demands of residency training. A previous study conducted at the University of California, San Francisco, in 2018 showed improved satisfaction and reported comfort with insulin management in pediatric residents who were given protected time for didactic insulin teaching. However, our subsequent focused needs assessment demonstrated that further research was needed to identify instructional methods to teach residents insulin management efficiently and effectively. Pediatric Resident Insulin Management Education (PRIME) aimed to improve insulin-management self-efficacy and knowledge among new pediatric interns during a single-session workshop. The workshop promoted learner engagement by using active learning strategies, including case-based team learning, peer teaching, and peer coaching.
We additionally utilized instructor coaching and role-modeling to provide learner scaffolding and encourage learner reflection. To our knowledge, a single-session insulin workshop promoting active learning with insulin management has not been previously published.

Setting and Participants

PRIME targeted University of California, San Francisco, pediatric interns during their first month of training (July/August 2020). The workshop was delivered at an academic half day, during which interns were released from service obligations for focused educational sessions. These half days were developed by the pediatric residency to incorporate core teaching on topics deemed important for residents to be exposed to within the first months of residency. We did not ask learners to do prework prior to our session. To prevent service disruptions, PRIME was offered twice, with 50% of the intern class attending each identical session. The University of California, San Francisco, Institutional Review Board deemed the educational session exempt from review.

Instructional Strategy and Implementation

The workshop promoted learner engagement by utilizing case-based team learning, peer teaching, and peer coaching. Pediatric endocrinology fellows and attendings, as well as the residency's associate program director of curriculum, developed the workshop, which consisted of three main components: (1) a brief overview didactic session; (2) small-group, case-based learning; and (3) peer teaching. Subject matter experts and the associate program director developed relevant learning objectives using the American Board of Pediatrics general content outline and a targeted needs assessment distributed in April 2020. Due to the COVID-19 pandemic and the need for physical distancing, each 2020 PRIME session was conducted in person in two separate classrooms connected via videoconferencing software. Learners were randomly assigned to one of two classrooms.
Following introductions, the workshop began with a 35-minute presentation delivered to the large group. The presentation briefly reviewed diabetes physiology, types of insulin, and the creation of a subcutaneous insulin plan. During the didactic, the instructor role-modeled insulin calculation by explaining their thought process for a specific case scenario. Following the didactic session, we randomly assigned learners to groups of three to four individuals. Each group was assigned one of three cases and instructed to create a safe insulin plan. We developed these cases, which were modeled after patients commonly seen in our hospital. After successful completion of the first case, we provided each group with a more difficult one (the challenge question) that expanded upon topics reviewed during the didactic session. Learners were able to use notes and resources throughout the group activities. One third-year pediatric endocrinology fellow was present in each classroom to coach each group through the case by answering questions, giving feedback, and asking questions to promote reflection. Learners additionally received a calculation handout to guide their approach to problem-solving. After all groups had completed both their cases, they rejoined the larger class. At that time, each group presented its patient case and suggested insulin plan. Groups also presented their challenge question and explained their thought process to their peers. During these presentations, the nonpresenting participants were encouraged to comment and ask questions. The facilitators were present throughout the process to help direct the peer teaching and make corrections as needed.

Facilitators

One third-year pediatric endocrinology fellow with advanced training in medical education delivered the didactic lecture for both PRIME sessions.
An additional third-year pediatric endocrinology fellow was present at each of the two 2020 PRIME sessions to assist with the small-group activities and ensure that there was one facilitator in each room. We developed an instructor guide that detailed the workshop schedule and the small-group cases and explained administration of the group cases. In addition, for each patient case, the instructor guide listed objectives, including those for the challenge question. We distributed the guide to the instructors prior to the session to help them prepare for the workshop. Instructors were not required to attend a training session prior to the workshop.

Assessment

We developed a survey to assess the effectiveness of the workshop in achieving the objectives. The survey was administered at the start of the workshop (pretest) and following the conclusion of the final exercise (posttest). The survey included questions, rated by learners on a 5-point Likert scale (1 = poor, 5 = outstanding ), to assess learner ability to perform specific tasks with insulin management. The primary outcome was self-efficacy, defined as an individual's confidence in the ability to perform a specific task in a given domain. The content-based questions were reviewed by faculty and fellows from the Division of Pediatric Endocrinology at the University of California, San Francisco. In addition to this survey, we gave learners the opportunity to evaluate the learning session in a separate evaluation.

Data Analysis

We used Stata 16.0 (StataCorp) for data analysis and assigned numerical scores to Likert-scale questions. We used tests of proportions—the number of learners who got the question correct out of the total number of responses—to compare the pre- and posttest results for the entire cohort.
Instructors were not required to attend a training session prior to the workshop. We developed a survey to assess the effectiveness of the workshop in achieving the objectives. The survey was administered at the start of the workshop (pretest) and following the conclusion of the final exercise (posttest). The survey included questions, rated by learners on a 5-point Likert scale (1 = poor, 5 = outstanding ), to assess learner ability to perform specific tasks with insulin management. The primary outcome was self-efficacy, defined as an individual's confidence in the ability to perform a specific task in a given domain. The content-based questions were reviewed by faculty and fellows from the Division of Pediatric Endocrinology at the University of California, San Francisco. In addition to this survey, we gave learners the opportunity to evaluate the learning session in a separate evaluation. We used Stata 16.0 (StataCorp) for data analysis and assigned numerical scores to Likert-scale questions. We used tests of proportions—the number of learners who got the question correct out of the total number of responses—to compare the pre- and posttest results for the entire cohort. Of the 28 residents who completed PRIME, 25 (89%) finished both the pre- and postworkshop surveys. When measuring self-efficacy, there was an increase in the median score for perceived ability to create a new subcutaneous insulin plan after course completion. Out of 16 knowledge questions, the mean percentage correct increased from 67% preintervention to 91% postintervention. There was a statistically significant increase in the proportion of participants who correctly answered questions assessing knowledge of insulin need while fasting ( p = .01) and steroid-induced diabetes physiology ( p = .01). 
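The "test of proportions" used here (comparing the number of learners answering correctly out of the total responses, pre vs. post) can be sketched as a standard two-proportion z-test. The counts below are hypothetical for illustration; the study itself used Stata 16.0, and its exact procedure may differ in detail:

```python
import math

def two_proportion_ztest(correct_a, n_a, correct_b, n_b):
    """Two-sided z-test for the difference between two proportions
    (correct answers out of total responses, pre vs. post)."""
    p_a, p_b = correct_a / n_a, correct_b / n_b
    pooled = (correct_a + correct_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF via the error function.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts for one knowledge question (25 surveys pre and post):
z, p = two_proportion_ztest(15, 25, 23, 25)
```

Note that this treats the pre- and post-groups as independent samples, as a plain test of proportions does, even though the same residents answered both surveys.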
In addition, there was a statistically significant increase in the proportion of learners who correctly identified fast-acting insulin ( p = .01), long-acting insulin ( p = .01), insulin used for infusions ( p = .001), and insulin used for insulin pumps ( p = .01). Given a clinical case, there was a statistically significant increase in the proportion of learners who correctly calculated the total daily dose of insulin ( p = .001), basal insulin ( p = .001), insulin sensitivity factor ( p = .02), and insulin-to-carbohydrate ratio ( p = .006; ). Learners were highly satisfied with the course, with a mean overall conference quality rating of 4.8 ( SD = 0.4) based on a 5-point Likert scale (1 = poor, 5 = outstanding ). PRIME utilized active learning strategies such as case-based team learning, peer teaching, and peer coaching to teach insulin management to new pediatric interns. The content was delivered as a discrete session that did not require learner self-directed preparation. The session was well received by residents and resulted in improvement in both self-efficacy and knowledge about insulin management. Although this workshop was administered to pediatric interns within the first 2 months of starting residency, it could be offered to all years of residency and medical students. The workshop was purposefully designed to be limited to one residency class to encourage team building amongst a new residency class. In addition, we felt that cohorting class years would encourage open participation and limit intimidation that might have occurred if other class levels were present. Our intervention to deliver insulin education was more forgiving of residents’ schedules compared to those previously described. Despite this, our workshop was shown to have similar improvements in self-efficacy and knowledge as other curricula. , As a single 90-minute session, the workshop can be integrated into either an intern orientation or a clinical rotation. 
It could also be divided into two shorter sessions: a didactic session and then a case-based session. However, further research would be needed to determine the efficacy of such a divided format. In contrast to other described insulin curricula, the workshop did not require any learner prework or self-directed learning, both of which could be challenging during busy clinical rotations. Medical education has shifted away from the classic pedagogy of an active teacher presenting to passive students. The contemporary student-centered approach prioritizes active learning and improves learner retention. Unfortunately, learners often rate active learning sessions less positively than passive sessions, which is thought to be due to the increased cognitive effort required during active learning. In contrast, we successfully utilized active learning techniques in the PRIME sessions and showed improvement in self-efficacy scores while maintaining high learner satisfaction. We acknowledge potential limitations to this workshop. It was implemented within a single residency program with only 28 participants, which could limit the generalizability of our findings. From an operational standpoint, not all residency programs may be able to provide protected time to pediatric residents or the facilitators needed for small-group work. We developed surveys using content experts but did not perform additional validity or reliability testing. Further work is needed to demonstrate retention of knowledge and feelings of self-efficacy. It is important to acknowledge that this project has not established causality between our chosen teaching methods and the improvement in self-efficacy and knowledge scores. A randomized controlled study is needed to compare self-efficacy and knowledge in learners exposed to an active learning session versus a passive learning session.
In addition, we ultimately hope to demonstrate that educational interventions employing active learning have positive impacts on patient care as measured by glucose control (i.e., episodes and duration of hyperglycemia and hypoglycemia) in hospitalized patients using insulin and by reduction in medication errors. In summary, PRIME is a single-session workshop, easily integrated into learners' schedules, that uses an active learning approach to successfully teach insulin management to pediatric interns. With local adaptation to specific program needs, these techniques can be easily transferred to other settings and curricula.

Appendices:
PRIME Presentation.pptx
Learner Cases.docx
Calculation Handout.docx
Instructor Guide.docx
Learner Survey.docx

All appendices are peer reviewed as integral parts of the Original Publication.
Deprivation and NHS General Ophthalmic Service sight testing activity in England in 2022–2023

The association between socioeconomic deprivation and eye disease has long been recognised globally, with a systematic review identifying recurring factors, including low income and low educational attainment being associated with an increased incidence of sight‐threatening conditions. In the United Kingdom, research shows that socioeconomic deprivation continues to be associated with more advanced eye disease. National Health Service (NHS) sight‐tests under the General Ophthalmic Services (GOS) by optometrists in primary care as well as privately funded eye examinations remain the underpinning eyecare services via which most eye disease is detected and referred into secondary care ophthalmology. Primary care optometrists also provide wider NHS services, either as part of local commissioning, for example, Integrated Care Board commissioned enhanced or extended services in England, or within enhanced GOS models elsewhere. Small‐area analysis of GOS activity in Leeds found populations in the least deprived areas were more likely to have NHS funded sight‐tests than those in more deprived areas, with small‐area data modelling in Essex reinforcing the need to address inequalities. The overall number of NHS sight‐tests undertaken in England was ~9.16, 12.77 and 12.94 million for the years 2020–2021, 2021–2022 and 2022–2023, respectively, with the first of these periods being impacted by the COVID‐19 pandemic restrictions, resulting in fewer sight‐tests compared to subsequent years. We have recently published an analysis revealing substantial variations in the crude rate of optometric practices per 100,000 of population in England, with this rate being much lower in the more deprived versus more affluent areas, with some evidence for increased practice closures in the most deprived areas.
However, that analysis did not provide updated evidence on NHS sight‐testing activity for these GOS contractors. Given the earlier work of Shickle et al., and with inequity of access to primary care optometry services seemingly being a significant unresolved concern in eyecare, we wanted to explore more recent post‐pandemic NHS sight‐testing activity in the GOS. The purpose of this paper is to explore sight‐testing activity per decile of deprivation for GOS contractor practices in England during 2022–2023, in the context of providing updated evidence for potential developments in eyecare aimed at reducing inequality. Limited GOS sight testing data are freely available within the public domain and our initial data access request for more detailed GOS sight‐testing information was made to NHS Business Services Authority (NHSBSA) who referred the query to Primary Care Support England (PCSE). PCSE were advised by NHS England (NHSE) that due to there being no formal process in place to access these data, it would need to be the subject of an NHSE Freedom of Information (FoI) request. An initial FoI request was made in February 2024, with clarification requested in March 2024. GOS sight‐testing data were requested, including the financial year 2022–2023, for the following: the number of sight‐tests accepted for payment in England by GOS contractors; provision of GOS contractors' unique Organisation Data Service (ODS) codes and postcodes and numbers accessing sight‐testing from age categories including children, working age adults and adults ≥65 years of age. Data were made available for analyses by NHSE in Microsoft Excel (Microsoft.com) file format. There were no information governance restrictions by NHSE as no patient identifiable data were provided, as defined within the Health and Social Care Act 2012. A deprivation score was assigned to the location of each contractor practice using the Index of Multiple Deprivation (IMD) 2019.
The average number of sight‐tests for contractors was calculated within each deprivation decile. Next, the rate of sight‐tests per 1000 population was calculated for each IMD decile using Office of National Statistics (ONS) Lower Layer Super Output Area (LSOA) mid‐year population estimates. Measuring inequality is complex, and a number of options were considered, using the Technical Guide from the Office for Health Improvement and Disparities and the Scottish Public Health Observatory Approaches to measuring health inequality to inform our choice. Three different measures were chosen to quantify inequality in the uptake of sight‐testing, potentially helpful for considering strategies to reduce inequality in both eye and wider health outcomes. The measures used were the odds ratio (OR), the Slope Index of Inequality (SII) and the Relative Index of Inequality (RII). SII measures the absolute inequality in sight‐testing, for example between the most and the least affluent areas, while RII measures the ratio between sight‐testing in the most affluent and the least affluent areas. The OR provides a simple assessment of the greater number of sight‐tests undertaken in the more affluent areas relative to the more deprived areas, whereas the SII and RII measures are regression based and therefore consider the whole population.

Rate and number of sight tests by GOS contractors by deprivation decile

Overall, the 12,941,325 NHS sight‐tests for 2022–2023 were provided by 5622 GOS contractors in England, including domiciliary sight‐tests. In considering the impact of deprivation, the average rate of sight‐testing by GOS contractors per 1000 population was estimated in each decile of deprivation (i.e., based on the IMD decile of the GOS contractor practices), illustrated in Figure .
In the most deprived decile, the rate of sight‐testing was approximately one quarter of that seen in the most affluent decile; a finding likely reflecting both a lack of uptake in more deprived areas but also fewer (and potentially smaller) practices in these more deprived areas. To understand the uptake of sight‐tests across IMD deciles, Figure shows the average number of sight‐tests undertaken by GOS contractors within each deprivation decile. In the most affluent decile, contractors undertook an average of ~2200 NHS sight‐tests in 2022–2023, while in the most deprived decile in the same year the average number of sight‐tests per contractor was half this figure, at ~1100 sight‐tests.

Inequality analysis

The OR was calculated by dividing the rate of sight‐testing in the most affluent decile (D1) by the rate of sight‐testing in the most deprived decile (D10). A relative OR of 5.29 (95% CI 5.27–5.30) is estimated, suggesting that people accessing sight‐testing in the most affluent areas are over five times more likely to have sight‐tests. SII and RII measures were calculated using rates per 1000 population as described earlier. Both these indices of inequality should equate to zero if there was an absence of inequality. For overall national data in 2022–2023, SII was 333.52 (95% CI 333.52–333.53) and RII was 6.40 (95% CI 6.39–6.40), with both indices showing substantial inequality in sight‐testing uptake, with patients in the least deprived areas accessing GOS services substantially more than those in the most deprived areas.

Sight testing by age

Analysis for the three age categories (which differ slightly from GOS sight‐testing eligibility categories) showed a marked difference in the number of tests undertaken. Figure illustrates the average number of NHS sight‐tests undertaken by GOS contractors within each IMD decile.
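The calculations described above can be sketched as follows. The decile rates and populations below are illustrative, and the SII/RII formulation shown (a population-weighted least-squares slope over relative rank midpoints, with RII taken as SII divided by the mean rate, so both are zero under equality) is one common variant from the OHID-style guidance; it is not necessarily the authors' exact implementation:

```python
def decile_rates(counts, populations):
    """Sight-tests per 1000 population for each IMD decile."""
    return [1000 * c / p for c, p in zip(counts, populations)]

def slope_index_of_inequality(rates, populations):
    """SII: weighted least-squares slope of decile rates against the
    relative population-rank midpoint (0 = most deprived, 1 = least
    deprived). RII here is SII / mean rate."""
    total = sum(populations)
    midpoints, cum = [], 0.0
    for p in populations:  # deciles ordered most -> least deprived
        midpoints.append((cum + p / 2) / total)
        cum += p
    w = [p / total for p in populations]
    xbar = sum(wi * x for wi, x in zip(w, midpoints))
    ybar = sum(wi * y for wi, y in zip(w, rates))
    cov = sum(wi * (x - xbar) * (y - ybar)
              for wi, x, y in zip(w, midpoints, rates))
    var = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, midpoints))
    sii = cov / var
    return sii, sii / ybar  # (SII, RII)

# Illustrative rates per 1000, ordered most deprived (D10) to least
# deprived (D1), with equal decile populations:
pops = [5.6e6] * 10
rates = [90, 120, 150, 180, 210, 240, 270, 300, 330, 360]
ratio_d1_d10 = rates[-1] / rates[0]  # the simple "OR"-style ratio
sii, rii = slope_index_of_inequality(rates, pops)
```

Because SII is regression based, it uses all ten deciles rather than only the two extremes, which is why the paper distinguishes it from the simple D1/D10 ratio.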
Interestingly, the least deprived two deciles (i.e., contractors in affluent areas) show a rate of sight‐testing >1000 per 1000 population for those 65 years and over, consistent with greater sight‐testing per contractor in these affluent locations; areas where it is known there are more contractors and likely larger practices providing services. In terms of disparities by deprivation decile, inequality analyses by age‐categories (summarised for the rate of sight‐testing in Table ) replicates the position shown in overall national sight‐testing analyses, with higher rates of sight‐testing in least deprived deciles and lower rates in more deprived deciles. The OR was smallest in the 0–16 years of age category at 3.89 (95% CI 3.87–3.92), with the OR being similar in the other age categories, that is, OR of 7.57 (95% CI 7.52–7.62) for ages 17–64 years and OR of 7.36 (95% CI 7.34–7.39) for those aged 65 and over. This finding suggests that inequality is lower in children in this specific context, although children in more affluent areas are still over three times more likely to have a sight‐test than those in more deprived areas. In adults, those in more affluent areas were over seven times more likely to have NHS sight‐tests in 2022–2023 than those in more deprived areas. RII was smallest (i.e., least inequality) in the 0–16 years of age category (4.02), with RII being similar in the other categories, that is, 17–64 years (RII 19.29) and the 65 years and over group (RII 19.54). For children, the gradient reflects that seen overall, with the average number of sight‐tests undertaken in the least deprived area being approximately double that seen in the most deprived areas. Within the over 65 year age‐group, the gradient from most to least deprived was less than that seen in the overall population, although the average number of sight‐tests reduces across deciles until the most deprived decile when it reduces substantially. 
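The very tight confidence intervals quoted for these ratios follow from the large event counts involved. A standard large-sample (log-normal) interval for a ratio of two rates can be sketched as below; the counts are hypothetical, and the authors' exact interval method is not stated in the text:

```python
import math

def rate_ratio_ci(events_a, pop_a, events_b, pop_b, z_crit=1.96):
    """Point estimate and approximate 95% CI for the ratio of two
    Poisson rates, using the usual log-scale SE sqrt(1/a + 1/b)."""
    rr = (events_a / pop_a) / (events_b / pop_b)
    se_log = math.sqrt(1 / events_a + 1 / events_b)
    lo = rr * math.exp(-z_crit * se_log)
    hi = rr * math.exp(z_crit * se_log)
    return rr, lo, hi

# Hypothetical: 500,000 tests in an affluent decile vs 100,000 in a
# deprived decile, equal populations -> a ratio of 5 with a narrow CI.
rr, lo, hi = rate_ratio_ci(500_000, 5.6e6, 100_000, 5.6e6)
```

With counts in the hundreds of thousands, the log-scale standard error is tiny, which is consistent with intervals as narrow as the 5.27–5.30 reported above.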
The greatest disparity was seen in the working age category (where the proportion of NHS funded sight‐tests was smaller, compared to other age categories), with the steep gradient reflecting the average number of sight‐tests being ~three times greater in least compared to most deprived areas.

There is a stark difference in the number of NHS sight‐tests and the rate of sight‐testing per decile of deprivation by GOS contractors operating in England during 2022–2023, with NHS services preferentially being accessed by patients attending contractors in more affluent areas. We have previously observed that primary care optometry contractors in England are inequitably distributed in relation to deprivation. In the present analyses, it is apparent that there are almost twice as many NHS sight‐tests being undertaken by contractors in the least compared to the most deprived areas, with the rate of sight‐testing in the most deprived decile being approximately one quarter of that seen in the most affluent decile. These data reflect recent unwarranted quantitative evidence for significant inequality in the uptake of sight‐testing between affluent and deprived areas, with a consistent and significant pattern across deciles of deprivation.
The rate of sight‐testing in areas of more affluence being at a rate of >1000 sight‐tests per 1000 population in the over 65 years of age category merits further comment. This finding is likely reflective, in part, of patients' travel away from residential localities to seek eyecare and/or competitive pricing (i.e., a move to more affluent areas for sight‐testing, locations where there are more practices available compared to deprived areas). It is acknowledged that the funding model for primary care optometry relies on the cross‐subsidisation of GOS sight‐testing (which incorporates eye health examinations) by the sale of spectacles, linking with services being situated where most economically viable. In turn, this acknowledgement coexists with concern about costs and the perceived pressure to buy spectacles for some. There is also potential for greater numbers having shorter testing intervals when deemed clinically necessary (and therefore double counting within the year) in the case of the more elderly patients, but this possible contributor appears less impactful in the more deprived deciles. Indeed, shorter testing intervals may impact all deciles to some extent. More broadly, risk factors for eye disease conferring GOS sight‐testing eligibility may also be more prevalent in deprived areas, for example, the higher prevalence of diabetes linked to socioeconomic deprivation, thereby amplifying concern about the potential impact of a differential uptake in sight‐testing and/or other eyecare services. Limitations to analyses of these data merit discussion. First, these analyses reflected only GOS NHS sight‐testing data versus data also including private eye examinations, although the former reflects in excess of 70% of all primary care eye examinations, while private eye examinations may be argued to be more reflective of affluent area activity. 
Second, the NHS sight‐testing rate estimated here is simply the number of sight‐tests undertaken in each IMD decile divided by the population of each decile. We are not able to comment on populations of patients served by contractors per se. We acknowledge that a limitation of these data is that we do not know where the individuals accessing sight‐testing reside in relation to the contractors. For example, claims for domiciliary sight‐tests may be processed through a head office location, and not the area within which sight‐testing was provided. Further, allocation of one IMD score per contractor is highly unlikely to reflect the whole population served by the contractor, which likely has patients with different IMD levels, an acknowledgement of the well‐reported complexity of socioeconomic status, neighbourhood deprivation and health. Considering deprivation at the GOS contractor level versus that of patients accessing NHS sight‐testing is likely to mean these analyses are unlikely to capture the impact of local neighbourhood deprivation, with IMD values in high streets not necessarily reflecting the neighbourhood in which they are situated. Further, data are not available regarding eligibility for NHS sight‐testing criteria beyond age, so for example within the 17–64 years of age group, their reason for eligibility may be income related or due to the presence of diabetes, glaucoma or risk of glaucoma. Previous local area studies using 2011 data and 2013–2015 data found higher GOS uptake in the more deprived quintiles among 16–59 year olds due to means tested social benefits. These 2022–2023 data for England do not show this effect. Factors such as high inflation and the increased cost of living may be possible reasons why people in a similar age group were not accessing NHS sight‐tests, albeit the data are England‐wide, and therefore may not show the variation observed at the regional, system and place levels. 
Further research and open access to data are required at the system and place levels to better understand such differences and local variations. It is also unclear what the impact of patients' home and/or work locations and their access to private or public transport (and associated costs) might have in relation to these data, since it is known, for example, that distance to practitioners is a perceived barrier to the uptake of eyecare. Third, it may be argued that while these ‘big data’ reflect unbiased large numbers with very tight confidence intervals, there may be unknown limitations owing to collation errors within the dataset being interrogated. Finally, due to changes in geography following the 2021 census and because the deprivation data have not been updated to reflect such changes, it was necessary to use the 2020 LSOA mid‐year population estimates from ONS. The IMD score was updated in 2019 and while this version may be outdated, despite limitations, IMD remains the best readily available method for examining deprivation. We believe that the various limitations discussed here are unlikely to have an effect that unpicks the stark major trends observed. There is longstanding recognition of the importance of the impact of deprivation in eyecare, but no real evidence concerns conveyed previously are resulting in plans to change the system for primary eyecare. Data such as those highlighted here may support policy makers to target finite NHS resources to those most in need, versus supporting a system favouring affluence. Research should be facilitated by healthcare datasets being made freely available, facilitating ongoing evaluation of publicly funded services, thereby allowing commissioners to use data to ensure equity of access to optometry services. 
Indeed, one outcome of this research is to call for better arrangements for the sharing of data with researchers, and if place‐of‐residence data could be recorded within an agreed range that satisfied pseudonymisation but allowed enough detail to relate to IMD locations, such a step would increase the scope for research to explore inequalities in the uptake of services. Nevertheless, and based on what is already known, Shickle et al. concluded that the GOS contract may be contrary to public health interests, proposing that different approaches are needed to address eye health inequalities to reduce preventable sight loss. A decade on, these data appear to reinforce a still outstanding need for change. Robert A. Harper: Conceptualization (equal); data curation (supporting); formal analysis (supporting); methodology (equal); project administration (equal); writing – original draft (lead); writing – review and editing (lead). Jeremy Hooper: Conceptualization (equal); data curation (lead); formal analysis (lead); project administration (equal); software (lead); writing – original draft (supporting); writing – review and editing (supporting). David J. Parkins: Conceptualization (supporting); methodology (supporting); writing – original draft (supporting); writing – review and editing (supporting). Cecilia H. Fenerty: Conceptualization (equal); data curation (supporting); formal analysis (supporting); methodology (supporting); writing – original draft (supporting); writing – review and editing (supporting). James Roach: Conceptualization (equal); data curation (supporting); methodology (supporting); project administration (supporting); software (supporting); writing – original draft (supporting); writing – review and editing (supporting). Michael Bowen: Conceptualization (supporting); methodology (supporting); writing – original draft (supporting); writing – review and editing (supporting). There was no funding for this research. 
Robert Harper and David Parkins are Life Fellows of the College of Optometrists where Michael Bowen is Director of research. David Parkins has an advisory role at NHSE London.
Bibliometric Analysis of Ophthalmology Publications from Arab Countries between 2012 and 2022

The field of biomedical research has grown tremendously over the last few decades. While a significant upward trend in publication volume has been observed globally, there have been wide variations in research productivity across different regions and countries due to differences in health-care systems, educational programs, and funding support programs. Within the Arab region, political, socioeconomic, and security dynamics have also been found to influence scientific productivity. Based on bibliometric reviews of biomedical articles published between 1988 and 2002 by Tadmouri and Bissar-Tadmouri and between 2001 and 2005 by Benamer and Benamer, the scientific production of Arab nations was found to be significantly lower compared to other countries in the world. In the field of ophthalmology alone, research productivity from Arab League countries was also found to relatively lag behind. Using the time frame 1900–2012, research output in ophthalmology from Arab countries (0.96%) represented <1% of the global research productivity in ophthalmology. While the aforementioned barriers to research activity and scientific publication likely contribute to this research disparity, it is also important to note that the accuracy of prior bibliometric analysis was affected by the lack of inclusion and indexing of many of the journals commonly used for publication by Arab-based authors within the standard databases such as ISI Web of Science, Scopus, and MEDLINE for bibliometric analyses.
Given the recent indexing of several major regional journals such as the Middle East African Journal of Ophthalmology in 2020, Saudi Journal of Ophthalmology in 2020, and the Journal of the Egyptian Ophthalmological Society in 2021 within the ISI Web of Science database, we sought to re-examine the status of ophthalmology research and provide a more accurate presentation of the geographic trends of research output and scientific productivity in Arab countries. In this study, we evaluated the research output of authors from Arab-based institutions in the field of ophthalmology from 2012 to 2022. This cross-sectional study involved a bibliometric analysis of all original research and review articles published in Ophthalmology Journals by ophthalmologists, optometrists, and researchers working in vision science with an affiliation with an institution from an Arab League nation between January 1, 2012, and December 31, 2022. As the study did not involve the evaluation or management of human participants, ethics committee review and approval were waived by the local institutional review board. This study abided by the Strengthening the Reporting of Observational Studies in Epidemiology reporting guideline for cross-sectional studies. The data were extracted from the ISI Web of Science database on March 28, 2023. Using the advanced search engine of ISI Web of Science, the search was limited to the "ophthalmology" category tag. Using the countries filter, Arab-based scientific articles were identified by restricting the search to the following countries: Algeria, Bahrain, Comoros, Djibouti, Egypt, Iraq, Jordan, Kuwait, Lebanon, Oman, Libya, Mauritania, Morocco, Palestine, Qatar, Saudi Arabia, Somalia, Sudan, Syria, Tunisia, United Arab Emirates, and Yemen. To evaluate the more recent production in the field of ophthalmology, the period of analysis was restricted to articles published over the last decade between January 1, 2012, and December 31, 2022.
The analysis was further limited to documents classified as articles, articles in press, and reviews. Letters, correspondences, and replies were excluded from this analysis. No other exclusion criteria for language or other publication parameters were applied. All collected data were imported into Microsoft Excel (Microsoft Corp., Redmond, WA, USA) for analysis. The number of articles was used as the indicator of quantity for scientific productivity. The countries and institutions were ranked according to the number of articles produced. For the years 2012–2022, 4292 articles published in Ophthalmology Journals by authors from Arab-based institutions were identified. The number of publications by Arab authors in journals indexed in ISI Web of Science increased steadily during the early years of this decade. The annual output of research articles nearly quadrupled from 169 in 2012 to 621 in 2020 and 645 in 2021, the first 2 years of the COVID-19 pandemic. In 2022, the number of publications slightly decreased to 527, which was still substantially higher than prepandemic levels. Overall, a 2.11-fold increase was observed within this decade. depicts the global distribution of ophthalmology articles published between 2012 and 2022. Both Egypt and Saudi Arabia ranked within the top 25 countries with the highest number of publications worldwide. Within the Arab League, the highest number of ophthalmology articles were published from Egypt ( n = 1653, 38.51%). This was followed by Saudi Arabia ( n = 1526, 32.74%), United Arab Emirates ( n = 338, 7.88%), Lebanon ( n = 299, 6.97%), and Tunisia ( n = 254, 5.92%). According to the institution affiliation within Arab nations, King Khaled Eye Specialist Hospital (KKESH) in Saudi Arabia ranked the highest in terms of scientific productivity with 644 articles, followed by the King Saud University in Saudi Arabia with 585 articles and the Cairo University in Egypt with 393 articles.
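The counting-and-ranking step described above is straightforward to reproduce. The sketch below uses pure Python on an illustrative list of article-country records (the actual Web of Science export is not reproduced here); the counts mirror the totals reported for the top five countries:

```python
from collections import Counter

# One entry per article-country affiliation (illustrative reconstruction,
# not the actual Web of Science export).
article_countries = (
    ["Egypt"] * 1653
    + ["Saudi Arabia"] * 1526
    + ["United Arab Emirates"] * 338
    + ["Lebanon"] * 299
    + ["Tunisia"] * 254
)

# Rank countries by the number of articles produced (the study's
# indicator of quantity for scientific productivity).
counts = Counter(article_countries)
for country, n in counts.most_common():
    print(f"{country}: {n} articles")
```

The same `Counter`-based ranking applies unchanged to institution affiliations.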
In terms of language, the majority of the articles produced were in English ( n = 4136, 96.37%) while the rest were in French ( n = 151, 3.52%), German ( n = 3, 0.07%), and Spanish ( n = 2, 0.05%). shows the top 25 peer-reviewed journals that were used for publication by Arab-affiliated ophthalmology researchers. Clinical Ophthalmology (8.11%) was the most commonly used, followed by the Saudi Journal of Ophthalmology (5.78%). Analysis of biomedical research and publications in a country or group of countries is an important tool to monitor progress and trends in research and scientific activity. Research productivity can be quantitatively measured in terms of the number of publications in peer-reviewed journals. While a number of bibliometric analyses in the Arab region have been previously published by various authors, recent indexing of major Arab-based Ophthalmology Journals within the ISI Web of Science database, as well as the progress toward open research in ophthalmology, has provided the opportunity to comprehensively assess the wider breadth of research and accurately evaluate research productivity within the region. This bibliometric study of publications of research from Arab nations in the field of ophthalmic and vision research shows that research productivity has substantially increased over the last decade. Notably, a sharp spike in publication volume was observed between 2020 and 2021 during the COVID-19 pandemic. This surge in publications fueled by the COVID-19 pandemic was similarly observed across all biomedical fields. Although the total number of publications in 2022 ( n = 527) had decreased compared to 2021 ( n = 645), the annual volume in 2022 is substantially higher than prepandemic levels.
This trend in research productivity follows the exponential growth in publications not only in the field of ophthalmology but in biomedical research in general. Several studies have discussed the factors that have led to the relative paucity of biomedical publications in the Arab region. While the current analysis finds that both Egypt (top 20) and Saudi Arabia (top 21) have now ranked among the top 25 countries worldwide in terms of the number of ophthalmology publications in the last decade, the rest of the Arab nations still lag behind in terms of research productivity. In fact, a close review of the relative contributions of different countries in the Arab region to the total number of publications in the field of ophthalmology showed that three-quarters of the total production in the last decade was contributed by authors from only three countries including Egypt (39%), Saudi Arabia (33%), and the United Arab Emirates (8%). Conversely, low-income Arab states such as Comoros, Djibouti, Mauritania, and Somalia produced the least number of publications in ophthalmology research in the studied period. Moreover, countries affected by wars and internal conflicts including Iraq, Libya, Palestine, Somalia, Sudan, Syria, and Yemen have also fared relatively poorly in terms of research output. These findings are fairly consistent with the results of previous studies. While scientific publications are broadly recognized as the primary indicator of research productivity, certain studies have indicated that raw counts should be normalized to indicators such as population size to provide a more accurate presentation of the status within each country. When the number of publications is adjusted for population size in 2022, Lebanon ranked first with 54 ophthalmic publications per million population, followed by Saudi Arabia with 42 publications per million population and the United Arab Emirates with 35 publications per million within the studied time frame.
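The per-capita normalisation reported above can be reproduced as follows. The population figures are approximate 2022 estimates supplied here for illustration (they are not taken from the article), so the computed rates are close to, but not necessarily identical with, the published values:

```python
# 2012-2022 publication totals from the study; populations (in millions)
# are approximate 2022 estimates assumed for this illustration.
pubs = {"Egypt": 1653, "Saudi Arabia": 1526,
        "United Arab Emirates": 338, "Lebanon": 299}
population_millions = {"Egypt": 111.0, "Saudi Arabia": 36.4,
                       "United Arab Emirates": 9.4, "Lebanon": 5.5}

# Publications per million population
rates = {c: pubs[c] / population_millions[c] for c in pubs}
for country, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{country}: {rate:.0f} publications per million")
```

Note how the ranking changes once raw counts are normalised: Egypt, first by raw count, drops below Lebanon on a per-million basis.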
Saudi Arabia, Egypt, Lebanon, and the United Arab Emirates can, therefore, be considered the leading countries for ophthalmic research in the Arab League. In terms of individual research institutions, seven of the top ten performers were university-based centers while the rest were hospital-based research centers. In contrast to a previous bibliometric analysis of ophthalmic publications, KKESH has currently outperformed other institutions with the highest productivity in ophthalmic research within the Arab region. Established in 1983, KKESH is one of the largest specialty eye hospitals in the world with a dedicated budget to support research-related activities. The current position of KKESH among other university-based research centers reflects how the allocation of research funds to academic settings outside the university setting can further promote ophthalmology research and increase overall scientific productivity. This study should also be viewed in the light of some limitations. First, in an effort to avoid count errors related to entry duplicates, ISI Web of Science was the only database used to identify the publications for analysis. Articles published in journals that have contributed to scientific productivity but were not indexed by the ISI Web of Science at the time of analysis were not considered. Second, while no search restrictions were applied to the type of article authors, the articles were identified under the “ophthalmology” category tag which excluded articles in basic science and general internal medicine journals. While this may have resulted in an underestimation of total output, the 116 Ophthalmology Journals included in this analysis represent the most important journals in the field of ophthalmology within the Arab region and internationally. Over the last decade, the overall productivity of research in the field of ophthalmology has significantly increased.
The majority of the articles were published by authors from Egypt and Saudi Arabia with KKESH as the most prolific institution among Arab nations within the studied time frame. Financial support and sponsorship: Nil. Conflicts of interest: There are no conflicts of interest.
Utilisation pattern of ophthalmic services in Ashanti Region, Ghana
Eye care utilisation is among the essential factors considered in addressing the rising global prevalence of visual impairment and avoidable blindness. Visual impairment and blindness remain a major public health issue globally, especially in the low- and middle-income regions. Although the elimination of avoidable blindness and visual impairment, especially from infectious diseases, has seen some significant progress in developed countries, it is still a major public health issue in the developing and under-developed countries. The World Health Organization (WHO) and the International Agency for the Prevention of Blindness (IAPB) have, for more than a decade, been working towards the eradication of preventable blindness by the year 2020 through the VISION 2020: The Right to Sight initiative. Fatouhi et al. revealed in their study that eye care service utilisation is a crucial factor in realising the goals of ‘VISION 2020’. Therefore, understanding utilisation trends across Ghana could assist in designing strategies to improve eye care utilisation. Best practice in optometry and ophthalmology recommends regular visits to eye care professionals, as routine eye examinations support the early detection of ocular defects and associated systemic, sometimes potentially life-threatening, conditions. Early interventions to restore, manage or treat such conditions also improve the prognosis. Furthermore, in cases of low vision where optimal sight cannot be restored, the quality of life of the patients can be improved through rehabilitation and other supportive interventions. It is essential that people access ophthalmic services regularly, because of the availability of interventions to prevent sight loss as a result of eye diseases or other causes of avoidable vision impairment.
However, a study among older adults reported that the uptake of eye examinations in low, lower-middle, upper-middle and high-income countries was only 10%, 24%, 22% and 37%, respectively. Studies have been conducted on the low level of utilisation of ophthalmic services in both developed and developing countries to understand the factors responsible for the low utilisation of ophthalmic services. Studies have reported that individuals do not routinely access ophthalmic services in Ghana. However, these studies fell short of finding the reasons for the low utilisation of ophthalmic services, which is crucial in developing strategies to improve utilisation of the service. A clear knowledge of ophthalmic service utilisation and its related factors will provide a framework, which stakeholders can use to develop strategies to improve eye care utilisation. This will reduce the prevalence of preventable blindness and visual impairment among people. This study sought to determine the utilisation of ophthalmic services and the associated factors in the Ashanti region of Ghana, to help inform strategies to improve utilisation of the service. Study design This study employed a population-based, cross-sectional descriptive design. To be included, participants had to fulfil the following requirements: residing in Ghana’s Ashanti region at the time of data collection and being at least 18 years old. Setting The sample was drawn from 10 out of the 43 districts in the Ashanti region of Ghana. Study population and sampling strategy The Fluid Survey Sample size calculator was used to determine a minimum sample size of 1537: Sample size = [z² × p(1 − p) / e²] / [1 + z² × p(1 − p) / (e² × N)] [Eqn 1], where N = population size, e = margin of error (percentage in decimal form), z = z-score and p = sample proportion.
For the population of 4 780 380 in the Ashanti region of Ghana, the minimum sample size was determined to be 1537 using a 95% confidence level and a 2.5% margin of error (the desired level of precision). A multistage sampling technique was used for this study. By means of a proportionate to population probability technique, 50 electoral areas were chosen from 10 randomly selected districts out of the 43 in the Ashanti region. Of the 10 selected districts, six (Amansie South, Atwima Nwabiagya, Bosomtwe, Kumawu, Amansie West and Offinso North) were rural while four (Asokore Mampong, Asokwa, Ejisu and Old Tafo) were urban. Upon reaching a central point in an electoral area, a bottle was spun to determine a direction. Following that, 15 households were selected per electoral area along the direction, making a total of 750 households. Within the households, 1804 individuals met the inclusion criteria, out of which 1615 (89%) agreed to take part in the survey. Data collection The biodata of the respondents (age, gender, education level, employment status and monthly income) as well as their eye care-seeking behaviour and utilisation were collected through a structured interviewer-guided questionnaire. Demographic information and the use of eye care services were determined with the use of items from the WHO multicountry World Health Survey questionnaire. Teaching and research assistants in the Optometry Department at Kwame Nkrumah University of Science and Technology in Kumasi, Ghana, administered all the questionnaires. The evaluation of eye care utilisation was performed by asking participants if they ever had an eye examination at an eye clinic.
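Eqn 1 is Cochran's sample-size formula with a finite population correction. A minimal Python check with the study's inputs (N = 4 780 380, z = 1.96 for 95% confidence, p = 0.5, e = 0.025) reproduces the reported minimum of 1537:

```python
import math

def sample_size(N, e, z=1.96, p=0.5):
    """Cochran sample size with finite population correction (Eqn 1)."""
    n0 = z ** 2 * p * (1 - p) / e ** 2   # infinite-population estimate
    return math.ceil(n0 / (1 + n0 / N))  # corrected for population size N

n = sample_size(N=4_780_380, e=0.025)
print(n)  # -> 1537
```

With N this large the finite population correction barely changes the result; the uncorrected estimate (z²p(1 − p)/e² ≈ 1536.6) already rounds up to 1537.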
The employment status, monthly income, residential location (rural or urban) and educational attainment were the enabling factors. The need factors were years since the last eye examination, having noticed a change in vision within the last 2 years and having systemic diseases (diabetes, hypertension, sickle cell anaemia). Systemic status (being afflicted by a systemic disease) was determined by self-reported history of diagnosis by a medical practitioner. Data analysis The Statistical Package for the Social Sciences Software, version 25 (SPSS 25) was used to analyse the data after it was entered into a Microsoft Excel worksheet (Microsoft Inc., USA). Analysis, both descriptive and inferential, was performed. Tables were used to display the results. Regression analysis and chi-square test were used for inferential analysis. A significance level of p ≤ 0.05 was applied to all tests. Ethical considerations Ethical approval for the study was obtained from the BREC of the University of KwaZulu-Natal, South Africa (Ref: BREC/00001787/2020) and the Committee on Human Research, Publications and Ethics of the Kwame Nkrumah University of Science and Technology, Kumasi, Ghana (Ref: CHRPE/AP/006/17). Gatekeeper consent was obtained from the Ashanti Regional Directorate of Health Services. Informed written consent was obtained from all the literate survey participants and the legally authorised representatives of illiterate participants. All study procedures adhered to the tenets of the Declaration of Helsinki.
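The chi-square tests used for the inferential analysis can be illustrated on a 2 × 2 table. The cell counts below are a rough reconstruction from the reported percentages (hypertensive vs. not, ever examined vs. not), not the study's raw data, and the df = 1 p-value is obtained with the standard erfc identity:

```python
import math

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]], df = 1."""
    n = a + b + c + d
    observed = (a, b, c, d)
    expected = ((a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n)
    chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    p = math.erfc(math.sqrt(chi2 / 2))  # df = 1 tail probability
    return chi2, p

# Hypertension vs. ever had an eye examination (reconstructed counts)
chi2, p = chi_square_2x2(123, 35, 568, 889)
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")  # highly significant, as reported
```

In practice a library routine such as SPSS's crosstabs (or SciPy's `chi2_contingency`) would be used; this sketch only makes the computation explicit.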
Sample characteristics A total of 1615 of the 1804 individuals who were eligible participated in this study, representing a response rate of 89%. The mean age of the participants was 36.2 years, with an age range of 18–82 years.
Of the participants, 54.4% were females and 52% lived in urban districts; 48.7% of the respondents were from the age group 18–29 years, followed by the 30–44-year-old group (26.7%). The majority of the respondents (87.7%) had some form of formal education and 25.6% had up to tertiary education. While 68.4% reported that they were in employment, 27.1% of the respondents earned less than 500 cedis (83 US dollars, the lowest wealth index) monthly, while only 0.4% earned more than 5000 cedis (826 US dollars, the highest wealth index). A diagnosis of one or more systemic diseases was reported by 12.3%: 3.9% were diabetic; 9.8% were hypertensive and 0.3% had sickle cell disease. Of the participants, 34.5% reported having observed a change in their vision within the last 2 years, while 85.3% of the participants reported that they felt regular eye examinations were necessary, even without symptoms, and 87.3% believed that children under five required eye examinations. All of the above-mentioned responses are shown in . Eye care utilisation among the respondents shows the percentage of participants who had ever had an eye examination and whether they visit an eye clinic whenever they have an eye problem, across selected variables. The result in was analysed by means of chi-square test. Overall, only 42.8% of the participants had ever had an eye examination and 17.3% always visit an eye facility whenever an eye problem arose. Predisposing factors A higher percentage of males had had an eye examination, compared with females (45.2% vs. 40.8%). However, this was not statistically significant. Utilisation of eye care services increases with age, as 88.4% of participants in the age group ≥ 75 years had had an eye examination, compared with 36.2% in the 18–29-year age group. also shows that age was significantly associated with visiting an eye care facility whenever an eye problem arose ( p < 0.001).
Enabling factors Bosomtwe and Asokore Mampong were the only 2 of the 10 districts where more than 50% of the participants (58.6% and 51.6%, respectively) had ever had an eye examination. Kumawu, which is one of the rural districts, registered the lowest (22.8%) eye care utilisation. While 59.9% of participants who were educated up to the tertiary level had sought eye care, all other education levels reported less than 50% utilisation of eye care services. The study also found increased eye care utilisation as income rose. Whereas 83.3% of participants who earn ≥ 5000 cedis had had an eye examination, only 27.1% of those earning ≤ 500 cedis had had an eye examination. Although the employed appeared to have utilised eye care services less than the unemployed (41.9% vs. 44.6%), this was not statistically significant ( p = 0.334). Among the enabling factors, constituency, level of education and monthly income were significantly ( p < 0.001) associated with visiting an eye clinic whenever an eye problem arose. Needs factors Whereas 78.0% of hypertensives and 58.8% of diabetics had sought an eye examination, only 40% of participants with sickle cell anaemia had ever utilised eye care. Hypertension and diabetes were found to be significantly associated with accessing eye care (hypertension, p < 0.001; diabetes, p = 0.004). Of the participants who self-reported vision problem in the last 2 years, 60.7% had ever had an eye examination, compared with 44.4% of those who felt regular eye examinations were necessary. However, both groups were significantly associated with seeking eye care ( p < 0.001 and p = 0.002, respectively). With regard to need factors, diabetes ( p = 0.004), hypertension ( p < 0.001) and vision problems ( p < 0.001) were significantly associated with accessing eye care whenever an eye problem arose. 
Factors associated with reports of a previous eye examination shows the results of the multiple logistic regression models on whether the participants had ever had an eye examination and the associated factors. Model 1: Predisposing factors People in the older age group showed an increased likelihood of having previously accessed eye care, compared with those between 18 and 29 years of age: 45–59 years, OR = 2.06, p < 0.001; 60–74 years, OR = 3.09, p < 0.001; ≥ 75 years, OR = 14.0, p < 0.001. Also, when compared with males, females had a reduced likelihood of previously seeking eye care (OR = 0.77; p = 0.014). Model 2: Predisposing and enabling factors In Model 2, which included predisposing and enabling factors, the associations with predisposing factors were similar to those in Model 1. Regarding the enabling factors, Model 2 showed that participants in Asokore Mampong, Bosomtwi and Ejisu had an increased likelihood of utilising eye care compared with those in Amansie South (Asokore Mampong, OR = 2.00, p = 0.006; Bosomtwi, OR = 3.44, p < 0.001; Ejisu, OR = 2.14, p = 0.006). Model 2 also showed that having higher education was significantly associated with an increased likelihood of having previously accessed eye care when compared with those who did not have any form of formal education (secondary, OR = 1.96, p = 0.002; tertiary, OR = 3.48, p < 0.001). Model 2 went on to show that increased wealth was associated with an increased likelihood of previously accessing eye care, when compared with the lowest wealth index (501 – 1000, OR = 1.67, p = 0.002; 1001 – 2000, OR = 2.91, p < 0.001; 2001 – 5000, OR = 3.84, p < 0.001). Model 3: Predisposing, enabling and need factors The final model, Model 3, incorporated the predisposing, enabling and need factors.
After statistical adjustment, the older age groups were found to be significantly associated with seeking eye care, compared with the 18–29-year-old age group: 30–44 years, OR = 1.73, p < 0.001; 45–59 years, OR = 2.12, p < 0.001; 60–74 years, OR = 3.17, p = 0.001; ≥ 75 years, OR = 12.24, p = 0.008. With regard to districts, Model 3 showed that only participants in Asokore Mampong had an increased likelihood of accessing eye care when compared with Amansie South (Asokore Mampong, OR = 2.51, p < 0.001). Model 3 also showed that education, when compared with no formal education, was significantly associated with eye care utilisation: primary, OR = 1.31, p < 0.001; intermediate, OR = 1.15, p < 0.001; secondary, OR = 2.25, p < 0.001; tertiary, OR = 3.51, p < 0.001. Model 3 further showed that being hypertensive (OR = 1.30, p < 0.001), having self-reported vision problems in the last 2 years (OR = 2.31, p < 0.001) and feeling that regular eye examinations are important (OR = 2.19, p = 0.007), were statistically associated with eye care utilisation.
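The odds ratios above are adjusted estimates from multivariable logistic regression; a crude (unadjusted) odds ratio with a 95% confidence interval can be read directly off a 2 × 2 table via the cross-product and Woolf's log method. The counts below are a hypothetical reconstruction from the reported percentages (hypertensive vs. not, ever examined vs. not), which is why the crude OR is far larger than the adjusted OR of 1.30 reported for hypertension:

```python
import math

def crude_odds_ratio(a, b, c, d, z=1.96):
    """Cross-product odds ratio with a Woolf (log-based) 95% CI."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypertensive vs. not, ever examined vs. not (reconstructed counts)
or_, lo, hi = crude_odds_ratio(123, 35, 568, 889)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

The gap between the crude and adjusted estimates illustrates why the study's multivariable models matter: older age is correlated with both hypertension and eye-care use, so the crude OR is heavily confounded.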
Of the participants, 34.5% reported having observed a change in their vision within the last 2 years, while 85.3% of the participants reported that they felt regular eye examinations were necessary, even without symptoms, and 87.3% believed that children under five required eye examinations. All of the above-mentioned responses are shown in . shows the percentage of participants who had ever had an eye examination and whether they visit an eye clinic whenever they have an eye problem, across selected variables. The result in was analysed by means of chi-square test. Overall, only 42.8% of the participants had ever had an eye examination and 17.3% always visit an eye facility whenever an eye problem arose. Predisposing factors A higher percentage of males had had an eye examination, compared with females (45.2% vs. 40.8%). However, this was not statistically significant. Utilisation of eye care services increases with age, as 88.4% of participants in the age group ≥ 75 years had had an eye examination, compared with 36.2% in the 18–29-year age group. also shows that age was significantly associated with visiting an eye care facility whenever an eye problem arose ( p < 0.001). Enabling factors Bosomtwe and Asokore Mampong were the only 2 of the 10 districts where more than 50% of the participants (58.6% and 51.6%, respectively) had ever had an eye examination. Kumawu, which is one of the rural districts, registered the lowest (22.8%) eye care utilisation. While 59.9% of participants who were educated up to the tertiary level had sought eye care, all other education levels reported less than 50% utilisation of eye care services. The study also found increased eye care utilisation as income rose. Whereas 83.3% of participants who earn ≥ 5000 cedis had had an eye examination, only 27.1% of those earning ≤ 500 cedis had had an eye examination. Although the employed appeared to have utilised eye care services less than the unemployed (41.9% vs. 
44.6%), this was not statistically significant ( p = 0.334). Among the enabling factors, constituency, level of education and monthly income were significantly ( p < 0.001) associated with visiting an eye clinic whenever an eye problem arose. Needs factors Whereas 78.0% of hypertensives and 58.8% of diabetics had sought an eye examination, only 40% of participants with sickle cell anaemia had ever utilised eye care. Hypertension and diabetes were found to be significantly associated with accessing eye care (hypertension, p < 0.001; diabetes, p = 0.004). Of the participants who self-reported vision problem in the last 2 years, 60.7% had ever had an eye examination, compared with 44.4% of those who felt regular eye examinations were necessary. However, both groups were significantly associated with seeking eye care ( p < 0.001 and p = 0.002, respectively). With regard to need factors, diabetes ( p = 0.004), hypertension ( p < 0.001) and vision problems ( p < 0.001) were significantly associated with accessing eye care whenever an eye problem arose. A higher percentage of males had had an eye examination, compared with females (45.2% vs. 40.8%). However, this was not statistically significant. Utilisation of eye care services increases with age, as 88.4% of participants in the age group ≥ 75 years had had an eye examination, compared with 36.2% in the 18–29-year age group. also shows that age was significantly associated with visiting an eye care facility whenever an eye problem arose ( p < 0.001). Bosomtwe and Asokore Mampong were the only 2 of the 10 districts where more than 50% of the participants (58.6% and 51.6%, respectively) had ever had an eye examination. Kumawu, which is one of the rural districts, registered the lowest (22.8%) eye care utilisation. While 59.9% of participants who were educated up to the tertiary level had sought eye care, all other education levels reported less than 50% utilisation of eye care services. 
The study also found increased eye care utilisation as income rose. Whereas 83.3% of participants who earn ≥ 5000 cedis had had an eye examination, only 27.1% of those earning ≤ 500 cedis had had an eye examination. Although the employed appeared to have utilised eye care services less than the unemployed (41.9% vs. 44.6%), this was not statistically significant ( p = 0.334). Among the enabling factors, constituency, level of education and monthly income were significantly ( p < 0.001) associated with visiting an eye clinic whenever an eye problem arose. Whereas 78.0% of hypertensives and 58.8% of diabetics had sought an eye examination, only 40% of participants with sickle cell anaemia had ever utilised eye care. Hypertension and diabetes were found to be significantly associated with accessing eye care (hypertension, p < 0.001; diabetes, p = 0.004). Of the participants who self-reported vision problem in the last 2 years, 60.7% had ever had an eye examination, compared with 44.4% of those who felt regular eye examinations were necessary. However, both groups were significantly associated with seeking eye care ( p < 0.001 and p = 0.002, respectively). With regard to need factors, diabetes ( p = 0.004), hypertension ( p < 0.001) and vision problems ( p < 0.001) were significantly associated with accessing eye care whenever an eye problem arose. shows the results of the multiple logistic regression models on whether the participants had ever had an eye examination and the associated factors. Model 1: Predisposing factors People in the older age group showed an increased likelihood of having previously accessed eye care, compared with those between 18 and 29 years of age: 45–59 years, OR = 2.06, p < 0.001; 60–74 years, OR = 3.09, p < 001; ≥ 75 years, OR = 14.0, p < 001. Also, when compared with males, females had a reduced likelihood of previously seeking eye care (OR = 0.77; p = 0.014). 
Model 2: Predisposing and enabling factors
In Model 2, which included predisposing and enabling factors, the associations with the predisposing factors were similar to those in Model 1. Regarding the enabling factors, participants in Asokore Mampong, Bosomtwi and Ejisu had an increased likelihood of utilising eye care compared with those in Amansie South (Asokore Mampong, OR = 2.00, p = 0.006; Bosomtwi, OR = 3.44, p < 0.001; Ejisu, OR = 2.14, p = 0.006). Higher education was significantly associated with an increased likelihood of having previously accessed eye care compared with no formal education (secondary, OR = 1.96, p = 0.002; tertiary, OR = 3.48, p < 0.001), and greater wealth was associated with an increased likelihood of previously accessing eye care compared with the lowest income band (501 – 1000, OR = 1.67, p = 0.002; 1001 – 2000, OR = 2.91, p < 0.001; 2001 – 5000, OR = 3.84, p < 0.001).
Model 3: Predisposing, enabling and need factors
The final model, Model 3, incorporated the predisposing, enabling and need factors. After statistical adjustment, the older age groups remained significantly associated with seeking eye care compared with the 18–29-year age group: 30–44 years, OR = 1.73, p < 0.001; 45–59 years, OR = 2.12, p < 0.001; 60–74 years, OR = 3.17, p = 0.001; ≥ 75 years, OR = 12.24, p = 0.008. With regard to districts, only participants in Asokore Mampong had an increased likelihood of accessing eye care compared with Amansie South (OR = 2.51, p < 0.001). Education, compared with no formal education, was also significantly associated with eye care utilisation: primary, OR = 1.31, p < 0.001; intermediate, OR = 1.15, p < 0.001; secondary, OR = 2.25, p < 0.001; tertiary, OR = 3.51, p < 0.001.
Model 3 further showed that being hypertensive (OR = 1.30, p < 0.001), having self-reported vision problems in the last 2 years (OR = 2.31, p < 0.001) and feeling that regular eye examinations are important (OR = 2.19, p = 0.007) were statistically associated with eye care utilisation.
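The three nested models follow Andersen's healthcare utilisation framework: predisposing factors first, then enabling, then need factors. A hedged sketch of that strategy on synthetic data (the variable names and categories are ours, and the fitted values will not reproduce the study's odds ratios):

```python
# Nested logistic regressions: predisposing -> + enabling -> + need factors.
# Data are randomly generated for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "examined": rng.integers(0, 2, n),  # ever had an eye examination
    "age_group": rng.choice(["18-29", "30-44", "45-59", "60-74", "75+"], n),
    "sex": rng.choice(["male", "female"], n),
    "education": rng.choice(["none", "primary", "secondary", "tertiary"], n),
    "income": rng.choice(["<=500", "501-1000", "1001-2000", "2001-5000", ">=5000"], n),
    "hypertension": rng.integers(0, 2, n),
    "vision_problem": rng.integers(0, 2, n),
})

# Model 1: predisposing factors only
m1 = smf.logit("examined ~ C(age_group) + C(sex)", data=df).fit(disp=0)
# Model 2: predisposing + enabling factors
m2 = smf.logit("examined ~ C(age_group) + C(sex) + C(education) + C(income)",
               data=df).fit(disp=0)
# Model 3: predisposing + enabling + need factors
m3 = smf.logit("examined ~ C(age_group) + C(sex) + C(education) + C(income)"
               " + hypertension + vision_problem", data=df).fit(disp=0)

odds_ratios = np.exp(m3.params)  # exponentiate log-odds to odds ratios
print(odds_ratios.round(2))
```

Comparing the three fits shows how estimates shift as each block of factors is adjusted for; exponentiating the coefficients yields odds ratios of the kind reported in the text.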
Coverage is a crucial component in the assessment of healthcare system performance. In this study, therefore, participants’ self-reported history of ever having visited an eye care facility for their eye care needs was used as the measure of eye care utilisation. The study revealed that 57.2% of the participants had never had their eyes examined, while only 25% had had their eyes examined at least once in the previous 2 years. Age, district of residence, level of education, monthly income, the presence of systemic disease (diabetes, hypertension), self-reported vision problems and feeling that regular eye examinations are important were significantly associated with eye care utilisation. Gender was not statistically associated with eye care utilisation in this study, although a higher proportion of males than females had utilised eye care.
More than half (57.2%) of the participants had never utilised eye care services, in agreement with several other studies. Possible barriers to the utilisation of ophthalmic services include availability, affordability and accessibility. This study found age to be significantly associated with eye care utilisation. The increased likelihood of older people seeking eye care, compared with the young, can be attributed to age-related ophthalmic conditions: the older one gets, the higher one’s risk of developing ophthalmic conditions and the more likely one is to seek eye care. This is consistent with studies in both developing and developed countries. District of residence was also found to be associated with eye care utilisation. Residents of Asokore Mampong, Ejisu and Bosomtwi were significantly more likely to seek eye care than those in other districts. Regional differences in eye care utilisation have been reported by other studies and may be attributed to differences in awareness of the need for regular eye examinations and in the availability of ophthalmic services in the communities. Asokore Mampong and Ejisu are urban districts with a number of public and private eye care facilities available to their people. Although Bosomtwi is a rural constituency, it has two well-equipped ophthalmic mission clinics operated under the auspices of the Christian Health Association of Ghana (CHAG), which serve patients across the region. This may be one of the reasons why a higher proportion of participants in Bosomtwi sought eye care compared with the other districts, despite it being rural. Level of education was associated with eye care utilisation: participants with a higher level of education were more likely to utilise eye care than the less educated, which may be attributed to greater knowledge of, and perhaps more concern about, their eye health. Fotouhi et al.
also reported an increased likelihood of utilising eye care with higher levels of education in an Iranian population, and presumed that educated people usually belong to a higher socioeconomic class, which facilitates greater access to eye care in terms of affordability. Other studies have also reported a positive association between education and eye care use. Monthly income was likewise associated with an increased likelihood of utilising eye care services. A plausible explanation could be the general socioeconomic status of higher-income earners: they may find ophthalmic services more affordable and thus have greater access than low-income earners. This observation has been reported in other low- and middle-income countries. Self-reported vision problems were also associated with an increased likelihood of utilising eye care services, as such problems may affect activities of daily living. Vision can also be a determinant in people’s career aspirations, and thus its loss is linked to the loss of certain occupations, which may in turn drive poverty and mental health conditions, among others. Ahmad et al. and Akuffo et al. reported a higher probability of persons with vision-related problems seeking eye care services; people with symptoms of blurred vision, or any other vision-related problem, will generally be more likely to seek eye care. Participants who felt regular eye examinations were important were significantly more likely to seek eye care. People who are aware of the need for regular eye examinations will know more about the preventative aspects of vision loss than those who are unaware, thus increasing their likelihood of using eye care services.
Regular eye examinations can detect ophthalmic conditions at their earliest stage, when they are most treatable, thereby preventing vision loss; many vision-threatening eye diseases, including glaucoma, macular degeneration, cataracts and diabetic retinopathy, have no or minimal symptoms until the disease has progressed. A significant association was also found between the presence of a systemic disease, such as diabetes or hypertension, and eye care utilisation. This could be a result of the recommendation generally given to diabetics and hypertensives to have their eyes monitored at least once every 2 years because of the risk of developing retinopathies after some years of having these conditions. Other studies have reported similar observations and recommended health education and awareness campaigns on the benefits of seeking ophthalmic services timeously to prevent visual impairment. To the authors’ knowledge, this is a pioneering study assessing ophthalmic service utilisation in the Ashanti region of Ghana, with the major advantage of using population-based evidence for eye care utilisation. This suggests that its findings could be generalised to the entire population of the Ashanti region. However, the study has some limitations. The measure of ophthalmic service utilisation was self-reported and may therefore have been affected by recall bias in recalling the period since the last eye examination. Data were generated from only 10 of the 43 districts in the region because of logistical constraints. Nonetheless, the results could assist in developing strategies to improve ophthalmic service utilisation in the Ashanti region and in Ghana as a whole. In conclusion, the study revealed an alarmingly low use of ophthalmic services in the Ashanti region of Ghana. Ophthalmic service utilisation was found to be associated with predisposing, enabling and need factors, as explained by Andersen’s healthcare utilisation model.
Regarding predisposing factors, ophthalmic service utilisation was associated with age but not with gender. Enabling factors such as district of residence, level of education and monthly income (wealth) were associated with an increased likelihood of seeking ophthalmic services. With regard to the need factors, diabetes, hypertension, self-reported vision problems and the belief that regular eye examinations are important were associated with eye care utilisation. Effective public health programmes are needed to address the socio-economic and individual barriers hindering the uptake of ophthalmic services in the Ashanti region of Ghana. The importance of timeously seeking eye care services in preventing visual impairment and blindness needs to be emphasised, and awareness of the eye care services available in the region, with an emphasis on the implications of delays in seeking eye care, should be promoted intensively. Stakeholders should start public eye health campaigns on the annual, dedicated World Sight Day to raise awareness about the need for regular eye examinations.
The Impact of Storage Conditions on DNA Preservation in Human Skeletal Remains: A Comparison of Freshly Excavated Samples and Those Stored for 12 Years in a Museum Depot
Obtaining sufficient amounts of high-quality ancient DNA (aDNA) from human skeletons remains a significant challenge in forensic and archaeological genetics. Owing to its inherent susceptibility to degradation, fragmentation, and exogenous contamination, the optimal preservation of skeletal remains is critical for ensuring the integrity of genetic material. Although numerous studies and reviews have addressed DNA extraction protocols from skeletal material, relatively little research has focused on the optimal storage conditions for human skeletal remains to preserve their integrity and prevent information loss over time. Skeletal remains, along with the DNA they harbor, begin to decay immediately after death. Like any material, skeletal remains function as an open system, exchanging energy and substances with their environment due to differences in composition. These interactions lead to energy instability and an imbalance between the material and its surroundings. To achieve stability, the chemical and structural properties of the material change. As equilibrium is approached, these changes diminish, eventually stabilizing the material for long-term preservation. However, this equilibrium can be disrupted by environmental changes—such as abrupt shifts during excavation or fluctuating storage conditions—or may never fully establish itself. In the absence of stability, continuous changes persist, eventually resulting in complete decay. Therefore, storing skeletal remains in a stable environment is essential for long-term preservation. The decay of skeletal remains is strongly influenced by environmental factors such as temperature, humidity, and pH.
Current guidelines recommend storing skeletal remains at a temperature of 16–20 °C and a relative humidity between 45% and 65%, while avoiding significant fluctuations. However, many museums are unable to adhere to these guidelines because of the size of their collections, limited storage space, and financial constraints. Furthermore, systematic research on the subject is lacking, leaving uncertainty about the optimal storage conditions for preserving human skeletal remains and their DNA. The consequences of storing these remains under unregulated conditions also remain poorly understood. This study investigates the impact of long-term storage under unregulated temperature and humidity conditions on DNA preservation in human skeletal remains. By comparing the quantity and integrity of DNA from freshly excavated skeletal remains with those stored for 12 years under seasonal fluctuations in temperature and humidity, this research aims to underscore the critical role of storage conditions in preserving the molecular integrity of human remains.
2.1. Description of Ljubljana’s Archaeological Sites Vrazov Trg and Njegoševa Ulica and Selection of Bone Samples
Archaeological excavations at Ljubljana’s Vrazov trg campus began during the construction of new facilities for the Faculty of Medicine, as the site is located within a registered cultural heritage area. Preceding archaeological research at the site had revealed the presence of cemetery remains, dating back to the modern era, in the courtyard of the building complex. The archaeological exhumations were conducted from 1 June to 20 July 2023 and uncovered 196 graves. The discovered cemetery is located on the southern side of the Church of St. Peter, the oldest church in the Diocese of Ljubljana, first mentioned in historical sources in 1163.
According to published data and previous exhumations conducted on the northern side of the church (the Njegoševa archaeological site), the cemetery dates back to the 9th century. The cemetery was active throughout the entire Middle Ages and into the modern era, until the decree of Maria Theresa in 1784 and the subsequent decree of Joseph II in 1787, which mandated that cemeteries around churches be abolished. Based on stratigraphic evidence, the cemetery can be divided into multiple phases, with most of the grave pits attributed to the earliest phase, having been dug into a layer of humus. Ljubljana’s Njegoševa archaeological site is located on the northern side of the Church of St. Peter, in close proximity to the archaeological site of Vrazov trg. The archaeological excavations in the area began in 2011 and ended in 2012. A total of 860 burials but only 287 grave pits were revealed, clustered around the church, meaning that more recent graves had probably been dug into older ones. The preliminary temporal classification of the skeletons was conducted based on the artifacts found in the grave pits, stratigraphic evidence, and the arm positions of the skeletons. A total of 132 skeletons were classified as early medieval. The burial ground remained active throughout the Middle Ages and expanded considerably in the Modern Age, between the 16th and 18th centuries, when the number of burials increased significantly. More than 600 skeletons excavated at the site were identified as belonging to the Modern Age. At both archaeological sites, most skeletons were oriented along the E–W axis with the head facing west. They were placed in simple wooden coffins, indicative of typical Christian burial practice. During the exhumations, the skeletons were categorized, photographed, and assigned a number that was retained throughout the entire process.
Ljubljana lies in the central region of Slovenia and experiences a continental climate, with annual average temperatures (1948–2019) ranging between 8.6 and 12.6 °C. The absolute highest and lowest temperatures of the year are 40.2 °C and −23.3 °C, respectively ( https://meteo.arso.gov.si/met/sl/climate/diagrams/ljubljana/ , accessed on 10 October 2024). The excavation site is located in the city center of Ljubljana and geologically belongs to the central part of the Ljubljana Basin. A total of 38 skeletons from the archaeological site of Vrazov trg were selected for comparison with the genetic data of 101 samples from Ljubljana’s Njegoševa site. To establish DNA preservation, only skeletons with preserved petrous bones were chosen. Genetic data for petrous bone samples from Ljubljana’s Njegoševa site were obtained in 2023, following genetic processing after 12 years of storage. The characteristics of the bone samples are given in Supplementary Material S1. Consent for sampling the skeletons from the archaeological site of Ljubljana’s Vrazov trg was obtained from the Museums and Galleries of Ljubljana (MGML). The research received approval from the National Medical Ethics Committee of the Republic of Slovenia (0120-308/2024-2711-3); the ethical approval date is 18 July 2024.
2.2. Storage of Bone Samples
During anthropological analysis, skeletons were categorized and assigned a number that remained with them throughout the entire process. In the case of Ljubljana’s Vrazov trg, the bones were superficially cleaned of soil. Exhumations at the Vrazov trg site were completed in July 2023, after which all dry-cleaned bone samples were delivered to our laboratory. The bone samples from the Njegoševa site were excavated in 2011 and cleaned of soil at the time of excavation. After drying, the skeletal remains were stored inside cardboard boxes in a museum depot in Ljubljana for approximately 12 years.
The depot is not insulated, so the environmental conditions affecting the preservation of the samples fluctuate with the weather. The humidity and temperature in the storage room are not measured, but the latter is estimated to range between 5 °C and 35 °C.
2.3. DNA Extraction
Petrous bones were sampled from the skeletons. Before cutting, all bones were chemically cleaned with 5% Alconox (Sigma-Aldrich, St. Louis, MO, USA), sterile bi-distilled water (Sigma-Aldrich), and 80% ethanol (Merck, Rahway, NJ, USA) to reduce surface contamination. Petrous bones that were still connected to the rest of the temporal bone were separated from it using a sterilized diamond bone saw (Schick, Schemmerhofen, Germany). The dense part of the petrous bone within the otic capsule was detached from the rest of the petrous bone following Pinhasi’s method and later used for extraction. Thin incisions were made on the surface of the petrous bones to facilitate the grinding process. Immediately before cutting, the bones were cooled with liquid nitrogen to prevent DNA degradation caused by the heat generated during cutting. The cut bone samples were then chemically cleaned again and left to dry overnight. The tools used for cutting and grinding bone samples were cleaned with 6% sodium hypochlorite, sterile bi-distilled water, and 80% ethanol, followed by sterilization with a Europa B xp sterilizer (Tecno-Gaz, Parma, Italy) at 134 °C for 45 min and UV irradiation for 20 min using a BLX-Multichannel BioLink DNA Crosslinker (Vilber, Collégien, France). To further prevent contamination, all reagents, with the exception of those labeled DNA-free or DNase-free, water, bi-distilled water, ethanol, sodium hypochlorite, and laboratory plastics, were sterilized (autoclaved) and disinfected with UV irradiation for 20 min using the BLX-Multichannel BioLink DNA Crosslinker (Vilber) before use.
The working surfaces were also cleaned with 6% sodium hypochlorite, bi-distilled water, and ethanol and subsequently exposed to UV radiation for 20 min, before and after use. To ensure the integrity of the samples, special measures were followed to avoid contamination with contemporary DNA. Bone preparation and extraction procedures were spatially separated from the post-extraction procedures. For the preparation of bone samples, a specifically designed space for handling old skeletal remains within a closed MC3 microbiological safety cabinet (Iskra Pio, Šentjernej, Slovenia), equipped with a HEPA filter and UV light, was used. DNA extraction was performed using 0.5 g of bone powder. Before grinding, the bones were cooled again to prevent the loss of DNA caused by overheating of the bone. A homogenizer (Bead Beater MillMix 20; Tehtnica, Domel, Železniki, Slovenia) was used to grind the bone samples into a fine powder. The samples were cleaned, ground, decalcified, and purified in accordance with the highly efficient DNA extraction method previously described by Zupanič Pajnič. To control for possible contamination and to monitor the purity of reagents and plastics, extraction-negative controls (ENCs) were included in every batch of samples. To minimize the risk of cross-contamination, no more than 12 bone samples were processed in each extraction batch. Furthermore, an elimination database was formed using samples from all individuals participating in the study, including personnel involved in the excavation, anthropological analysis, and DNA analysis. The sampling consisted of collecting saliva using sterile cotton swabs and extracting DNA from the buccal smears. To avoid contaminating bone samples with contemporary DNA, two different machines were used for purifying the elimination database samples and the bone samples.
A BioRobot EZ1 machine (Qiagen, Hilden, Germany) was used to purify DNA extracted from buccal smears, and an EZ1 Advanced XL machine (Qiagen), used in our laboratory exclusively for purifying DNA from aged bones, was used to purify DNA extracted from bone samples. The EZ1&2 DNA Investigator kit (Qiagen) was used for the purification of bone samples and buccal swabs, following the previously published protocol and the instructions provided by the manufacturer.
2.4. DNA Quantification
Real-time PCR (qPCR) analysis was used to determine the DNA concentration and degree of degradation in the petrous bones. The short autosomal fragment (85 bp, Auto target), conveying the concentration of nuclear DNA, the Y-chromosomal fragment (Y target), and the long autosomal fragment (294 bp, Deg target) were detected using the PowerQuant System (Promega Corporation, Madison, WI, USA), following the manufacturer’s instructions. The Auto and Deg target values were used to calculate the DNA degradation index (Auto/Deg ratio). The PowerQuant System (Promega) also includes an internal PCR control (IPC) to detect the possible presence of inhibitors in the amplification reaction. A QuantStudio 5 Real-Time PCR System and QuantStudio Design and Analysis Software 1.5.1 (Applied Biosystems, Thermo Fisher Scientific, Waltham, MA, USA) were used to export and process the raw data. The Auto, Deg, and Y values, the Auto/Deg ratio, and the IPC shift values, along with their associated standard curves, were determined with the PowerQuant Analysis Tool ( https://worldwide.promega.com/resources/tools/powerquant-analysis-tool/ , accessed on 15 October 2024). The IPC shift threshold and the Auto/Deg threshold were set at 0.3 and 2, respectively, as per the manufacturer’s recommendations. The quantity of DNA extracted from 1 g of bone powder was determined for each sample based on the Auto target values, and the result was converted into units of ng DNA/g of bone sample.
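The derived quantities just described reduce to a few lines of arithmetic. A minimal sketch (the function names are ours, not Promega's; the thresholds follow the manufacturer's recommendations quoted in the text):

```python
# Quantities derived from PowerQuant qPCR readings, as described above.
def dna_per_gram(auto_ng_per_ul: float) -> float:
    """ng DNA per g of bone powder: 50 uL of extract obtained from
    0.5 g of powder gives a conversion factor of 50 / 0.5 = 100."""
    return auto_ng_per_ul * 100

def degradation_index(auto: float, deg: float) -> float:
    """Auto/Deg ratio; values above the threshold of 2 point to degraded DNA."""
    return auto / deg

def possibly_inhibited(ipc_shift: float, threshold: float = 0.3) -> bool:
    """Flag a sample when the IPC shift exceeds the 0.3 threshold."""
    return ipc_shift > threshold

# Example: a hypothetical extract measured at 0.02 ng/uL (Auto), 0.004 ng/uL (Deg)
print(dna_per_gram(0.02), degradation_index(0.02, 0.004), possibly_inhibited(0.1))
```

The ×100 factor is explained in the text that follows; the Auto/Deg and IPC checks mirror the kit guidelines stated above.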
The Auto target values were multiplied by a factor of 100 to account for the use of 0.5 g of bone powder and the dilution of the extracted DNA in 50 µL of TE buffer.
2.5. Statistical Analysis
A statistical comparison was conducted on bone samples from two archaeologically comparable sites: Ljubljana’s Vrazov trg site, located on the southern side of St. Peter’s Church, and the Njegoševa site, situated on the northern side of the same church. The two sites are historically and geographically equivalent, ensuring that all environmental factors affecting DNA preservation, other than the storage of the skeletal remains, were as similar as possible. For the statistical analysis, two parameters were used to describe DNA preservation in the bone samples: the Auto target, representing DNA quantity, and the Auto/Deg ratio, representing DNA quality. All parameter values were acquired using the PowerQuant Analysis Tool (Promega). Statistical analysis was performed using IBM SPSS Statistics for Windows, version 28.0 (Statistical Package for the Social Sciences Inc., Chicago, IL, USA). To explore the effect of post-excavation storage of skeletal remains on DNA preservation, the following research hypotheses were formulated:
Hypothesis 1. There are no statistically significant differences between petrous bones from Njegoševa and Vrazov trg in the amount of DNA extracted (ng DNA/g bone).
Hypothesis 2. There are no statistically significant differences between petrous bones from Njegoševa and Vrazov trg in the degradation ratio (Auto/Deg).
The normality and homogeneity of variance were tested using the Kolmogorov–Smirnov test (with Lilliefors significance correction). The research hypotheses were tested using 95% confidence intervals for means or medians, which have been suggested as an appropriate measure for testing differences among groups, especially in medical studies.
As the sample size is relatively small, confidence intervals can have limited power to detect significant differences. The formulated hypotheses were therefore also tested using p values, with significance set at p ≤ 0.05. The database contained data from the bone samples of 139 individuals: 101 petrous bones from the Njegoševa site and 38 from the Vrazov trg site. The Kolmogorov–Smirnov test showed that the data were not normally distributed; thus, non-parametric tests were performed, and medians were used for the confidence intervals.
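The analysis plan of Section 2.5 (normality check, then a non-parametric two-group comparison reported with medians) can be sketched on synthetic data as follows; the Mann–Whitney U test is our stand-in for the unnamed non-parametric test, and the yields are invented:

```python
# Two-group comparison of DNA yield between sites, on synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical ng DNA/g bone yields; log-normal, i.e. skewed, as bone yields often are
vrazov_trg = rng.lognormal(mean=3.0, sigma=1.0, size=38)    # freshly excavated
njegoseva = rng.lognormal(mean=2.5, sigma=1.0, size=101)    # stored 12 years

# Kolmogorov-Smirnov test against a fitted normal (SPSS applies the Lilliefors
# correction; statsmodels' lilliefors() would be the closer equivalent)
ks = stats.kstest(vrazov_trg, "norm", args=(vrazov_trg.mean(), vrazov_trg.std()))

# Non-normal data: compare the groups with a rank-based test and report medians
u_stat, p_value = stats.mannwhitneyu(vrazov_trg, njegoseva)
print(f"KS p = {ks.pvalue:.3f}; Mann-Whitney U = {u_stat:.0f}, p = {p_value:.4f}")
print(f"medians: {np.median(vrazov_trg):.1f} vs {np.median(njegoseva):.1f}")
```

Bootstrapped 95% confidence intervals for the medians, as used in the study, could be added on top of this comparison.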
The archaeological excavations in the area began in 2011 and ended in 2012. A total of 860 burials and only 287 grave pits were revealed, clustered around the church, meaning that more recent graves were probably dug into the older ones. The preliminary temporal classification of the skeletons was conducted based on the artifacts found in grave pits, stratigraphic evidence, and the arm positions of skeletons. A total of 132 skeletons were classified as early medieval. The burial ground remained active throughout the Middle Ages and expanded considerably in the Modern Age, between the 16th and 18th century, when the number of burials increased significantly. More than 600 skeletons, excavated at the site, were identified as belonging to the Modern Age . At both archaeological sites, most skeletons were oriented along the E–W axis with the head facing west. They were placed in simple wooden coffins, indicative of a typical Christian burial practice. During the exhumations, the skeletons were categorized, photographed, and assigned a number that was retained throughout the entire process. Ljubljana lies in the central region of Slovenia and experiences a continental climate with annual average temperatures (1948–2019) ranging between 8.6 and 12.6 °C. The absolute highest and lowest temperatures of the year are 40.2 °C and −23.3 °C, respectively ( https://meteo.arso.gov.si/met/sl/climate/diagrams/ljubljana/ , accessed on 10 October 2024). The excavation site is located in the city center of Ljubljana and geologically belongs to the central part of the Ljubljana Basin. A total of 38 skeletons from the archaeological site of Vrazov trg were selected for comparison with genetic data of 101 samples from Ljubljana’s Njegoševa site. To establish DNA preservation, only skeletons with preserved petrous bones were chosen. Genetic data for petrous bone samples from Ljubljana’s Njegoševa site were obtained in 2023, following genetic processing after 12 years of storage. 
The characteristics of bone samples are shown in —SM S1. Consent for sampling the skeletons from the archaeological site of Ljubljana’s Vrazov trg was obtained from the Museums and Galeries of Ljubljana (MGML). The research received approval from the National Medical Ethics Committee of the Republic of Slovenia (0120-308/2024-2711-3), the ethical approval date is 18 July 2024. During anthropological analysis, skeletons were categorized and assigned a number that remained through the entire process. In the case of Ljubljana’s Vrazov trg, the bones were superficially cleaned of soil. Exhumations at the Vrazov trg site in Ljubljana were completed in July 2023, after which all dry-cleaned bone samples were delivered to our laboratory. The bone samples from the Njegoševa site were excavated in 2011 and were cleaned of soil at the time of excavation. After drying, skeletal remains were stored inside cardboard boxes in a museum depot in Ljubljana for approximately 12 years. The depot is not insulated so the environmental conditions affecting the preservation of samples fluctuate with the weather. The humidity and temperature in the storage room are not measured, but the latter is estimated to range between 5 °C and 35 °C. Petrous bones were sampled from the skeletons. Before cutting, all bones were chemically cleaned with 5% Alconox (Sigma-Aldrich, St. Louis, MO, USA), sterile bi-distilled water (Sigma-Aldrich), and 80% ethanol (Merck, Rahway, NJ, USA) to reduce surface contamination. Petrous bones that were still connected to the rest of the temporal bone were separated from it using a sterilized diamond bone saw (Schick, Schemmerhofen, Germany). The dense part of the petrous bone within the optic capsule was detached from the rest of the petrous bone following Pinhasi’s method and later used for extraction. Thin incisions were made on the surface of petrous bones to enhance the grinding process. 
Immediately before cutting, the bones were cooled with liquid nitrogen to prevent DNA degradation caused by the heat generated during cutting. Cut bone samples were then chemically cleaned again and left to dry overnight. Tools used in cutting and grinding bone samples were cleaned with 6% sodium hypochlorite, sterile bi-distilled water, and 80% ethanol, followed by sterilization with a Europa B xp sterilizer (Tecno-Gaz, Parma, Italy) at 134 °C for 45 min and UV irradiation for 20 min using a BLX-Multichannel BioLink DNA Crosslinker (Vilber, Collégien, France). To further prevent contamination, all reagents (excluding those labeled DNA-free or DNase-free), bi-distilled water, ethanol, sodium hypochlorite, and laboratory plastics were sterilized (autoclaved) and disinfected with UV irradiation for 20 min using the BLX-Multichannel BioLink DNA Crosslinker (Vilber) before use. The working surfaces were also cleaned with 6% sodium hypochlorite, bi-distilled water, and ethanol and subsequently exposed to UV radiation for 20 min, before and after use. To ensure the integrity of samples, special measures were followed to avoid contamination of samples with contemporary DNA. Bone preparation and extraction procedures were spatially separated from the post-extraction ones . For the preparation of bone samples, a specifically designed space for handling old skeletal remains within a closed MC3 microbiological safety cabinet (Iskra Pio, Šentjernej, Slovenia), equipped with a HEPA filter and UV light, was used. DNA extraction was performed using 0.5 g of bone powder. Before grinding, the bones were cooled again to prevent the loss of DNA caused by bone overheating. A homogenizer (Bead Beater MillMix 20; Tehtnica, Domel, Železniki, Slovenia) was used to grind bone samples into a fine powder. The samples were cleaned, ground, decalcified, and purified in accordance with the highly efficient DNA extraction method previously described by Zupanič Pajnič .
To control for possible contamination and to monitor the purity of reagents and plastics, extraction-negative controls (ENCs) were included in every batch of samples . To minimize the risk of cross-contamination, no more than 12 bone samples were processed in each extraction batch. Furthermore, an elimination database was formed using samples from all individuals participating in the study, including personnel involved in excavation, anthropological analysis, and DNA analysis. The sampling consisted of collecting saliva using sterile cotton swabs and extracting DNA from buccal smears. To avoid contaminating bone samples with contemporary DNA, two different machines were used for purifying the elimination database samples and the bone samples. A BioRobot EZ1 machine (Qiagen, Hilden, Germany) was used to purify DNA extracted from buccal smears, and an EZ1 Advanced XL machine (Qiagen), exclusively used in our laboratory for purifying DNA from aged bone, was used to purify DNA extracted from bone samples. The EZ1&2 DNA Investigator kit (Qiagen) was used for the purification of bone samples and buccal swabs following the previously published protocol and instructions provided by the manufacturer . Real-time PCR (qPCR) analysis was utilized to determine the DNA concentration and its degradation in petrous bones. The short autosomal fragment (85 bp, Auto target), reflecting the concentration of nuclear DNA, the Y-chromosomal fragment (Y target), and the long autosomal fragment (294 bp, Deg target) were detected using the PowerQuant System (Promega Corporation, Madison, WI, USA) following the manufacturer's instructions . Auto and Deg target values were used for calculating the DNA degradation index (Auto/Deg ratio). The PowerQuant System (Promega) also includes an internal PCR control (IPC) to detect the possible presence of inhibitors in the amplification reaction.
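The quantification readout just described lends itself to simple post-processing. The sketch below computes the Auto/Deg degradation index and flags likely degradation and inhibition using the thresholds cited in this study (Auto/Deg > 2, IPC shift ≥ 0.3); the sample values are hypothetical, not measurements from this work.

```python
# Illustrative post-processing of PowerQuant-style qPCR readouts.
# Thresholds follow the manufacturer's recommendations cited in the text:
# IPC shift >= 0.3 suggests inhibition; Auto/Deg > 2 suggests degradation.

def degradation_index(auto_ng_ul: float, deg_ng_ul: float) -> float:
    """Auto/Deg ratio: short (85 bp) over long (294 bp) target concentration."""
    if deg_ng_ul <= 0:
        return float("inf")  # long target undetected: severe degradation
    return auto_ng_ul / deg_ng_ul

def flag_sample(auto_ng_ul, deg_ng_ul, ipc_shift,
                deg_threshold=2.0, ipc_threshold=0.3):
    """Summarize one sample: degradation index plus boolean quality flags."""
    ratio = degradation_index(auto_ng_ul, deg_ng_ul)
    return {
        "auto_deg": ratio,
        "degraded": ratio > deg_threshold,
        "inhibited": ipc_shift >= ipc_threshold,
    }

# Hypothetical sample: short target 0.020 ng/uL, long target 0.0004 ng/uL
result = flag_sample(auto_ng_ul=0.020, deg_ng_ul=0.0004, ipc_shift=0.1)
print(result)  # Auto/Deg = 50 -> degraded, not inhibited
```

Returning infinity when the long target is undetected is one convention for a fully degraded sample; a real pipeline might instead report "Deg not detected".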
QuantStudio 5 Real-Time PCR System and QuantStudio Design and Analysis Software 1.5.1 (Applied Biosystems, Thermo Fisher Scientific, Waltham, MA, USA) were used to export and process the raw data. Auto, Deg, and Y values, the Auto/Deg ratio, and IPC shift values, along with their associated standard curves, were determined with the PowerQuant Analysis Tool ( https://worldwide.promega.com/resources/tools/powerquant-analysis-tool/ , accessed on 15 October 2024). The IPC Shift threshold and the Auto/Deg threshold were set at 0.3 and 2, respectively, as per the manufacturer's recommendations . The quantity of DNA extracted from 1 g of bone powder was determined for each sample based on the Auto target values. The result was converted into units of ng DNA/g of bone sample. The Auto target values were multiplied by a factor of 100 to account for the use of 0.5 g of bone powder and the dilution of the extracted DNA in 50 µL of TE buffer. A statistical comparison was conducted on bone samples from two archaeologically comparable sites: Ljubljana's Vrazov trg site, located on the southern side of St. Peter's Church, and the Njegoševa site, situated on the northern side of the same church. Both sites are historically and geographically equivalent, thus ensuring that all environmental factors affecting DNA preservation, excluding storage of skeletal remains, were as similar as possible. For the statistical analysis, two parameters were used to describe DNA preservation in bone samples: the Auto target, representing DNA quantity, and the Auto/Deg ratio, representing DNA quality. All parameter values were acquired using the PowerQuant Analysis Tool (Promega). Statistical analysis was performed using IBM SPSS Statistics, version 28.0. To explore the effect of storage of skeletal remains after excavation on DNA preservation, the following research hypotheses were formulated: Hypothesis 1.
There are no statistically significant differences between petrous bones from Njegoševa and Vrazov trg in the amount of DNA extracted (ng DNA/g bone). Hypothesis 2. There are no statistically significant differences between petrous bones from Njegoševa and Vrazov trg in the degradation ratio (Auto/Deg). The normality and homogeneity of variance were tested using the Kolmogorov–Smirnov test (with Lilliefors significance correction). The research hypotheses were tested using 95% confidence intervals for means or medians, suggested as an appropriate measure for testing differences among groups, especially in medical studies , in IBM SPSS Statistics for Windows, version 28.0 (SPSS Inc., Chicago, IL, USA). As the sample size is relatively small, confidence intervals can have limited power to detect significant differences . Thus, the formulated hypotheses were also tested using p values. Significance was set at p ≤ 0.05. The database contained data from bone samples of 139 individuals, including 101 petrous bones from the Njegoševa site and 38 from the Vrazov trg site. The Kolmogorov–Smirnov test showed that the data are not normally distributed. Thus, non-parametric tests were performed, and medians were used for the confidence intervals.

3.1. DNA Quantification

Results obtained with the PowerQuant System (Promega), including bone sample characteristics and parameters of DNA quality and quantity, such as the Auto/Deg ratio, IPC Shift, and the Auto, Deg, and Y targets, are shown in . Values for Auto, Deg, and Y targets are conveyed in units of ng DNA/μL of extract. Additionally, the DNA yield was calculated and expressed in ng DNA/g of bone. To minimize error and variability, all amplification reactions were performed in duplicate, and the duplicate average was used as the basis for all subsequent calculations. According to developmental validation by Ewing et al.
, 0.5 pg of DNA per μL of extract is the minimum concentration recommended for reliable quantification with the PowerQuant qPCR kit (Promega). More than 0.5 pg DNA/μL of extract was detected in all bone samples apart from one, excavated from Ljubljana's Vrazov trg archaeological site (see ). DNA degradation varied between bone samples and was highest in samples originating from the Njegoševa archaeological site, with Auto/Deg values reaching 387.70 and an average of 71.84. In contrast, samples from the Vrazov trg site showed lower degradation levels, with a maximum Auto/Deg value of 332.57 and an average of 51.68. The presence of inhibitors was detected in nine samples, in which the IPC Shift value met or exceeded the IPC Shift threshold of 0.3 (see ). In the remaining samples, the IPC value was below 0.3, indicating that purification with magnetic bead technology in the EZ1&2 DNA Investigator Kit (Qiagen) was highly efficient. In most ENC samples, no PowerQuant targets were detected. In cases where an amplification product was detected, the amount of DNA did not exceed the detection limit of the PowerQuant kit, showing no contamination issues.

3.2. Statistical Analysis

Statistical analysis was performed to determine possible variations in the DNA yield and degree of DNA degradation for samples originating from historically and geographically equivalent archaeological sites (Ljubljana's Vrazov trg and Njegoševa sites), which differed in times of excavation and, consequently, in storage durations and conditions. When comparing the DNA yield and the degree of DNA degradation for petrous bones excavated from Ljubljana's archaeological sites of Vrazov trg and Njegoševa, two hypotheses were tested to evaluate whether the two parameters exhibit any statistically significant differences. Based on the combined results of the independent-sample median test and confidence intervals, both formulated hypotheses should be rejected.
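Two quantities used in this analysis can be sketched in code: the ×100 conversion of the Auto reading (ng DNA/µL of extract, given 0.5 g of powder eluted in 50 µL of TE) into ng DNA/g of bone, and a non-parametric bootstrap confidence interval for a group median, standing in for the SPSS procedure used in the study. The input values below are hypothetical, not the study's data.

```python
# Sketch of the yield conversion and a bootstrap CI for the median.
# The x100 factor follows from 0.5 g of bone powder eluted in 50 uL of TE.
import random
import statistics

def yield_ng_per_g(auto_ng_per_ul, elution_volume_ul=50.0, bone_powder_g=0.5):
    """Convert a qPCR Auto reading (ng DNA/uL of extract) to ng DNA/g of bone."""
    return auto_ng_per_ul * elution_volume_ul / bone_powder_g

def median_ci(data, n_boot=2000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the median."""
    rng = random.Random(seed)
    medians = sorted(
        statistics.median(rng.choices(data, k=len(data)))
        for _ in range(n_boot)
    )
    lo = medians[int(alpha / 2 * n_boot)]
    hi = medians[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# An Auto reading of 0.3164 ng/uL corresponds to ~31.64 ng DNA/g of bone
print(yield_ng_per_g(0.3164))

hypothetical_yields = [28.1, 35.6, 31.2, 40.3, 25.9, 33.8, 30.4]  # ng/g
print(median_ci(hypothetical_yields))
```

A percentile bootstrap is only one way to interval-estimate a median; SPSS uses its own procedure, so this is an illustration of the idea rather than a reproduction of the study's numbers.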
The results of nonparametric statistical testing showed significant differences (p < 0.001) in the DNA yield, while the differences in the Auto/Deg ratio were borderline significant (p = 0.053). Petrous bones from Vrazov trg yielded more DNA (Mean = 31.64, Standard Error = 3.22) compared to those from Njegoševa (Mean = 17.66, Standard Error = 1.44) . The Auto/Deg ratio was higher in petrous bones from Njegoševa (Mean = 71.84, Standard Error = 6.73) than in those from Vrazov trg (Mean = 51.68, Standard Error = 9.33), showing the difference in DNA degradation between samples from the two sites . Descriptive statistics, test statistics, and tests for the normality of the distribution of the DNA yield and the degradation ratio can be seen in .

While advancements in forensic and ancient DNA analysis continue to identify new genetic markers, the issue of sample preservation persists as a critical concern. During and after excavation, skeletal remains are subjected to an abrupt change in environmental conditions that profoundly impacts the stability and integrity of bone DNA. This underscores the critical need for implementing effective preventive measures and ensuring proper storage protocols to maintain sample integrity. Such measures should mitigate DNA damage caused by spontaneous degradation and are particularly important in cases where DNA is already highly compromised, as seen in old and poorly preserved skeletal remains. As observed by Pruvost et al. , standard storage procedures can adversely affect DNA survival in fossil bones. To minimize DNA degradation, bone samples are commonly stored at −20 °C, as low temperature inhibits enzymatic activity and microbial proliferation . However, recent studies have reported changes in bone crystallinity and increased DNA degradation associated with temperature fluctuations during freeze–thaw cycles . Additionally, long-term freezer storage presents a significant financial burden for facilities storing human skeletal remains. Consequently, long-term storage at room temperature can be advantageous for facilities managing skeletal remains, but strict control of temperature and humidity is essential to ensure the preservation of valuable genetic material.
When investigating the effect of storage at unregulated temperature and humidity on the DNA yield and preservation in bone samples, our results revealed significant differences between freshly excavated samples and those stored in a museum depot for 12 years. A significant reduction in recovered DNA, as well as an increase in DNA degradation, was observed for samples stored at unregulated temperature and humidity for 12 years. Since all samples in this study originated from the same geographical location and had equivalent post-mortem intervals, we can assume that the samples experienced no differences in other environmental factors affecting DNA survival, such as soil pH, external temperature, or hydrological conditions at the burial site. Our results, consistent with several other studies discussing the storage of skeletal material , therefore suggest that storage conditions, especially unregulated temperature and humidity, detrimentally affect the amount and integrity of extracted DNA. Additionally, it is reasonable to conclude that fluctuations in temperature and humidity are better buffered while the skeletal remains are still buried, as the soil provides thermal insulation and regulated moisture levels. Indeed, soil represents a highly complex burial environment and includes many factors that can potentially influence and interfere with bone preservation . The buffering effect most likely results in a stabilization of the microenvironment surrounding the remains and better DNA preservation in the case of the freshly excavated skeletons in our study. Howes et al. reported similar findings in their study on bone degradation in different soil environments. They observed that while skeletal remains are still embedded in the soil, moisture content and temperature have a minimal impact on the preservation of organic components in bone tissue .
Based on our findings, we propose that archaeological skeletal remains be sampled for genetic analysis immediately after excavation to avoid DNA loss during storage. Where direct sampling after excavation is not possible, we advise that bone fragments be stored in museum depots following current scientific recommendations. According to our findings, unregulated temperature and humidity pose a serious threat to DNA preservation in stored skeletal material. Building on previous research, which demonstrated a significant reduction in the DNA yield from bone fragments stored in freezers for 10 years , we endorse the recommendations of some of the world's leading museums regarding the optimal temperature and humidity for long-term storage of skeletal remains. Storage spaces should maintain a stable temperature between 16 and 20 °C, with relative humidity controlled within a range of 45–65%, as this range prevents mold growth, which occurs under excessively humid conditions, while also avoiding bone cracking caused by low humidity . In addition to humidity and temperature extremes, fluctuation of both is known to enhance bone weathering and should be avoided . To minimize these effects, the storage area should be protected against moderate daily temperature fluctuations as well as more extreme seasonal changes, as both could contribute to the degradation of DNA in bone samples. Furthermore, other factors promoting DNA decay, such as exposure to UV light, should also be considered. Since UV light is known to induce DNA mutations and has more recently been shown to reduce bone density in exposed samples , it is crucial to store skeletal remains in environments with minimal or no sunlight exposure. In this study, we aimed to investigate the impact of unregulated temperature and humidity during long-term storage on DNA preservation in human skeletal material. To assess DNA preservation, the DNA yield and degree of DNA degradation were determined.
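The storage envelope recommended above (16–20 °C, 45–65% relative humidity, limited fluctuation) can be encoded as a simple range check over logged depot readings. The readings below are hypothetical examples, not measurements from the museum depot in this study.

```python
# Minimal check of logged storage conditions against the recommended envelope:
# 16-20 degrees C and 45-65% relative humidity; large temperature swings are
# also flagged, since fluctuation itself promotes bone weathering.

RECOMMENDED_TEMP_C = (16.0, 20.0)
RECOMMENDED_RH_PCT = (45.0, 65.0)

def out_of_range(readings, bounds):
    """Return the readings falling outside the closed interval `bounds`."""
    lo, hi = bounds
    return [r for r in readings if not (lo <= r <= hi)]

def check_storage(temps_c, rh_pct):
    """Summarize violations of the recommended storage envelope."""
    return {
        "temp_violations": out_of_range(temps_c, RECOMMENDED_TEMP_C),
        "rh_violations": out_of_range(rh_pct, RECOMMENDED_RH_PCT),
        "temp_swing_c": max(temps_c) - min(temps_c),
    }

# Hypothetical logger readings over one day
report = check_storage([18.2, 19.5, 22.4, 17.0], [50, 63, 70, 48])
print(report)  # flags 22.4 degrees C and 70% RH; swing of about 5.4 degrees
```

In practice such a check would run against a continuous data logger; the point here is only that the recommended ranges are concrete enough to monitor automatically.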
According to our results, unregulated conditions in long-term storage detrimentally contribute to DNA degradation in stored bone samples. Considering the importance of reducing costs while sustaining DNA integrity in bone samples, we recommend that facilities storing human skeletal material regulate temperature and humidity levels within scientifically recommended ranges to ensure optimal DNA preservation.

Key points:
- Effective storage protocols for bone samples are needed to ensure optimal DNA preservation.
- Freshly excavated petrous bones and petrous bones stored in a museum depot were sampled.
- The DNA yield and degree of DNA degradation were compared for 186 bone samples.
- Freshly excavated bones were found to have a higher DNA yield and a lower degree of DNA degradation.
Phenotypic and genotypic screening of multidrug resistant

Ready-to-eat street foods (RTESF) are defined as foods for immediate consumption or use with no need for further processing or preparation and are sold as common street foods in small roadside outlets . There is a higher dependency on this type of food due to convenience and acceptance by consumers; additionally, RTESF save time and are considered inexpensive . Unfortunately, the consumption of street food increases the potential risk of foodborne illnesses such as diarrhea or traveler's diarrhea . Foodborne diseases (FBDs) are considered a public health concern in many countries, as contaminated food was reported to be responsible for up to 600 million FBD cases and an estimated global burden of up to 33 million disability-adjusted life years . Moreover, more than 91 million African people are affected by foodborne diseases according to a report by the World Health Organization (WHO) . In Africa, the prevalence of food poisoning is underestimated, since people with gastrointestinal symptoms rarely go to health facilities . Salads are considered minimally processed foods because they only undergo washing, peeling, chopping, drying, and packaging, with no heat treatment . In addition, processes like cutting and peeling might disrupt their exterior natural barrier and release plant juices that encourage microbial growth, so fruits and vegetables are more vulnerable to microbial contamination and proliferation . Vegetables have also been implicated as carriers of foodborne pathogens, such as Salmonella species, Campylobacter species, Listeria monocytogenes, and enterohaemorrhagic strains of Escherichia coli . These pathogens can affect vegetables before (water, soil, manure, insects, handling) or after (water, peeling, cutting, packaging, handling) harvest .
Furthermore, a growing number of foodborne illness outbreaks have been connected to the consumption of fresh produce and minimally processed fruits and vegetables throughout the food chain . Notably, bacteria that produce extended-spectrum β-lactamases (ESBLs), particularly members of the Enterobacteriaceae family (Klebsiella pneumoniae and E. coli), have been reported in leafy vegetables . Thus, it is critical to monitor the microbiological quality of fresh-cut packed salads. K. pneumoniae is a widespread opportunistic bacterium that causes several human and animal diseases including meningitis, bronchitis, bacteremia, pneumonia, and urinary tract infection . K. pneumoniae is considered a common contaminant in many food items, including meat, fresh vegetables, and fish, so it has been regarded as a significant foodborne pathogen . Resistance to numerous routinely available drugs has been considered a public health concern (associated with increased mortality, length of stay, and increased cost) due to the increased prevalence of extended-spectrum β-lactamase (ESBL)- and plasmid-mediated AmpC β-lactamase-producing pathogens . β-lactamases are mainly reported in Gram-negative bacteria, specifically Escherichia coli and K. pneumoniae . These bacteria could represent a threat for consumers since they may disseminate during food production and processing . Animals have been identified as the main reservoir for ESBL-producing microorganisms, and foods may contribute to the spread of resistance to humans through the food chain . Antibiotic-resistant bacteria are also disseminated through fecal animal waste around farms, slaughterhouses, and meat processing units . Several studies on K. pneumoniae isolated from food have reported its multidrug-resistant (MDR) phenotype . Clinical management has become more challenging due to the emergence of MDR among K.
pneumoniae strains, which has led to increased patient morbidity and mortality ; therefore, it should be taken seriously. The WHO, in its global action plan against antimicrobial resistance, has identified food as one of the potential vehicles for transmission of antimicrobial-resistant bacteria to humans. Further, the human consumption of food carrying resistant bacteria has led to the acquisition of antibiotic-resistant infections . Additionally, the WHO has categorized Enterobacteriaceae that are resistant to carbapenems and third-generation cephalosporins, including K. pneumoniae, as critical priority pathogens on its list of antibiotic-resistant bacteria that require new treatments. Also, the increased prevalence of ESBL-producing foodborne bacteria in RTESF is considered a serious risk . Besides resistance, hypervirulent K. pneumoniae (hvKP) has emerged as a serious clinical pathogen since it causes a plethora of community-acquired infections . Additionally, hvKP utilizes a battery of virulence factors for survival and pathogenesis, such as capsule, siderophores, lipopolysaccharide, fimbriae, outer membrane proteins, and the type 6 secretion system . Among the most predominant virulence factors is the capsular polysaccharide, which increases resistance to phagocytosis . Other important virulence determinants in hvKP, the mucoviscosity-associated gene A (magA) and the regulator of the mucoid phenotype A (rmpA), have been related to serious invasive infections . Furthermore, siderophores are considered crucial bacterial virulence factors. The most prevalent siderophore systems are enterobactin (entB), aerobactin (iutA), and yersiniabactin (ybtS), along with the kfu gene, which encodes ferric iron uptake. Likewise, another K. pneumoniae virulence determinant is fimbriae, proteins that can identify a broad range of molecular motifs and guide the bacteria to specific tissue surfaces in the host . Type 1 and type 3 fimbriae (Mrk) mediate the attachment of K.
pneumoniae to cells of the respiratory and urinary tracts . Therefore, to prevent pathogens in RTESF from spreading to other parts of the environment, food and water screening is necessary. Despite the high risk of transmission to humans through consuming contaminated food, only a few studies have been done; hence, the available data are limited. K. pneumoniae is recognized as one of the most important Gram-negative opportunistic pathogens; nevertheless, knowledge of the mechanism whereby this bacterium causes different diseases is still unclear, and most studies have several limitations because of the narrow range of virulence factors investigated . Therefore, the current study aimed to assess the prevalence and antimicrobial resistance of K. pneumoniae isolated from RTESF in Egypt, in addition to scrutinizing the presence of virulence genes.

Sample collection and preparation

The study was conducted over about 2 years (from January 2021 to December 2022) in Tanta, Egypt. A sample size of 242 RTESF (green salad) samples was calculated with Epi Info 2000 software, based on an outcome prevalence of 36%, a 95% confidence interval, and 90% study power. The 242 samples were collected from 242 different food suppliers and placed in separate sterile plastic bags; they were transferred into an ice box directly after purchase and moved to the microbiological laboratory within 24 h for investigation. Twenty-five (25) grams of each green salad sample were mixed with 225 mL of buffered peptone water and then homogenized for 2 min in a laboratory blender (Stomacher 400 Circulator; Seward Ltd., Worthing, UK) .

Enumeration of the total bacterial number by viable count technique

The experiment was conducted as previously reported . One mL of each sample homogenate was added to 9 mL of sterile distilled water, and seven dilutions were made for each sample.
Under aseptic technique, 0.1 mL of the diluted sample was pipetted onto a sterile nutrient agar plate and spread gently using an L-spreader; the plate was left to dry and then incubated at 37 °C for 24 h. Finally, the total viable colonies were counted (LEICA QUEBEC DARKFIELD COLONY COUNTER MODEL 3325) and expressed in CFU/mL.

Isolation of different organisms and identification of K. pneumoniae isolates from street food samples

A 100 µL aliquot of the ten-fold dilution of each sample homogenate was streaked on the following media: MacConkey agar and Eosin Methylene Blue (EMB) agar for isolation of Enterobacteriaceae (lactose fermenters and non-fermenters), Mannitol Salt Agar (MSA) for isolation of Staphylococcus species, and phenol red egg yolk polymyxin (PREP) agar for isolation of Bacillus spp. (all media were purchased from OXOID, UK). The experiment was carried out as described elsewhere . K. pneumoniae isolates were identified based on Gram staining, colony characteristics, and a standard array of conventional biochemical tests: indole, methyl red, Voges-Proskauer, citrate utilization, triple sugar iron (TSI) agar, oxidase, catalase, capsule, motility, and urease tests .

Antimicrobial susceptibility testing

Antibiotic susceptibility testing was performed and interpreted using the standard Kirby-Bauer disk diffusion technique according to the Clinical and Laboratory Standards Institute (CLSI) . In this study, a panel of 16 different commercially available antibiotic disks (HiMedia, India) was used.
The antibiotics used were cefoperazone (CEP, 75 µg), amikacin (AK, 30 µg), amoxicillin/clavulanic acid (AMC, 20/10 µg), ampicillin/sulbactam (SAM, 20 µg), cefoxitin (FOX, 30 µg), ceftriaxone (CRO, 30 µg), cefotaxime (CTX, 30 µg), cefuroxime (CXM, 30 µg), co-trimoxazole (COT, 25 µg), chloramphenicol (C, 30 µg), tobramycin (TOB, 15 µg), imipenem (IPM, 10 µg), meropenem (MEM, 10 µg), piperacillin-tazobactam (TPZ, 100/10 µg), tetracycline (TE, 30 µg), and norfloxacin (NOR, 10 µg). In addition, the minimum inhibitory concentration (MIC) of imipenem was estimated using the broth microdilution technique according to CLSI, and Escherichia coli ATCC 25922 was used as a control.

Phenotypic detection of extended-spectrum β-lactamases (ESBLs)

Detection of ESBLs was done using the double disk synergy test (DDST) as previously reported . The isolates were swabbed onto Mueller-Hinton agar (MHA) and tested for resistance to amoxicillin/clavulanic acid (20/10 µg), ceftazidime (30 µg), and cefotaxime (30 µg). Upon incubation at 37 °C for 18–24 h, ESBL production was detected by the formation of a zone of inhibition around the cephalosporins that increases towards the amoxicillin/clavulanic acid disk, resulting in synergy formation .

Phenotypic detection of carbapenemases by the Triton Hodge Test (THT)

The experiment was performed according to Khalil et al. . Approximately 50 µL of pure Triton X-100 (Sigma-Aldrich, St. Louis, MO, USA) was poured onto the center of an MHA plate and immediately spread across the entire plate in 4 to 6 directions. Afterwards, the plate was left undisturbed for around 10 min until the reagent was entirely absorbed. The test was carried out using a meropenem disk (10 µg). Additionally, K. pneumoniae ATCC BAA-1705 and ATCC BAA-1706 strains were used as positive and negative controls, respectively.

Biofilm formation test

The biofilm formation ability of K. pneumoniae isolates was tested using the microtiter plate technique as previously described .
Briefly, 180 µL of Luria-Bertani (LB) broth containing 1% glucose and 20 µL of fresh bacterial culture were added to sterile 96-well flat-bottom microtiter plates. Sterile LB supplemented with 1% glucose was used as a negative control, while K. pneumoniae ATCC 13883 was used as a positive control. After incubation at 37 °C for 18 h, each well was successively rinsed with phosphate-buffered saline (PBS). Before staining with crystal violet (2%), wells were dried at 60 °C for 1 h. Subsequently, glacial acetic acid 33% (v/v) was used to solubilize the bound dye, and the absorbance was estimated at 570 nm (OD570). The experiment was performed three times, and the average reading was considered . Based on the obtained ODs, strains were classified into four groups, namely strong, moderate, and weak biofilm producers, and non-biofilm producers . The cutoff OD (ODc) was defined as the mean OD of the negative control plus three standard deviations. The degree of biofilm formation was reported as follows: strong biofilm formation (OD > 4×ODc), moderate biofilm formation (2×ODc < OD < 4×ODc), weak biofilm formation (ODc < OD < 2×ODc), and non-biofilm formation (OD < ODc) .

Detection of genes encoding β-lactamases by polymerase chain reaction (PCR)

Two multiplex PCR assays were used for detection of ESBL genes: one multiplex assay comprised bla TEM/bla SHV/bla OXA-1, and a second one comprised bla CTX-M (including phylogenetic groups 1, 2, and 9) . In addition, one uniplex PCR was used for detection of bla CTX-M-8/-25 . Genomic DNA was extracted as previously described . Amplification was carried out as follows: initial denaturation at 95 °C for 5 min; 30 cycles of denaturation at 94 °C for 30 s, annealing at 56 °C for 30 s, and extension at 72 °C for 1 min; and a final extension at 72 °C for 10 min. PCR amplicons were separated electrophoretically on a 1.2% agarose gel with ethidium bromide dye and visualized under UV light.
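Returning to the biofilm assay, the ODc-based grading rule described above can be expressed directly in code. Boundary cases are handled inclusively here as a convention, and the OD readings are hypothetical examples rather than measurements from this study.

```python
# Biofilm grading from microtiter-plate OD570 readings:
# ODc = mean OD of negative controls + 3 standard deviations, then strains
# are binned by multiples of ODc (weak, moderate, strong, or non-producer).
import statistics

def cutoff_od(negative_controls):
    """ODc: mean of blank wells plus three (sample) standard deviations."""
    return (statistics.mean(negative_controls)
            + 3 * statistics.stdev(negative_controls))

def classify_biofilm(od570, odc):
    """Bin one strain's OD570 reading against the ODc cutoff."""
    if od570 <= odc:
        return "non-producer"
    if od570 <= 2 * odc:
        return "weak"
    if od570 <= 4 * odc:
        return "moderate"
    return "strong"

# Hypothetical triplicate blank wells give ODc = 0.09 + 3 * 0.01 = 0.12
odc = cutoff_od([0.08, 0.09, 0.10])
for od in (0.05, 0.15, 0.30, 0.60):
    print(od, classify_biofilm(od, odc))
```

With an ODc of 0.12, readings of 0.05, 0.15, 0.30, and 0.60 fall into the non-producer, weak, moderate, and strong bins, respectively.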
For quality control of the PCR assay, known control organisms harboring bla CTX-M, bla TEM, and bla SHV were included as positive controls in each run. All primers used are listed in Table .

Detection of virulence genes of K. pneumoniae
Multiplex PCR was used for detection of nine virulence genes in K. pneumoniae: ybtS, mrkD, entB, rmpA, K2, kfu, allS, iutA, and magA. Table shows the primer sequences, annealing temperatures, product sizes, and concentrations of the primers used. The PCR reactions were performed as previously described. Positive and negative controls were included in all PCR assays. The amplicons were separated at 100 V for 2 h in a 1.2% (wt/vol) agarose gel containing ethidium bromide.

Statistical analysis
Data (antibiotic resistance, biofilm production, resistance genes, and virulence genes) were analyzed using the Statistical Package for the Social Sciences software version 22 (SPSS Inc., Chicago, IL, USA). The chi-square test was used to compare more than two qualitative groups. All statistical tests were two-sided. Figures , , and were prepared using GraphPad Prism software 5.0. Significance was set at p ≤ 0.05.

The study was conducted over approximately two years (January 2021 to December 2022) in Tanta, Egypt. A sample size of 242 RTESF (green salad) samples was calculated with Epi Info 2000 software, based on an outcome prevalence of 36%, a 95% confidence interval, and 90% power of the study.
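As a minimal illustration of the chi-square comparison of qualitative groups described above, the statistic can be computed from a contingency table and checked against the critical value at α = 0.05. The counts below are hypothetical, not the study's data; a statistics package such as SPSS would normally report the exact p-value.

```python
# Illustrative chi-square test of independence for comparing
# qualitative groups (counts are hypothetical, not the study's data).

def chi_square(table):
    """Return the chi-square statistic and degrees of freedom for a 2-D count table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / grand
            stat += (obs - exp) ** 2 / exp
    dof = (len(table) - 1) * (len(table[0]) - 1)
    return stat, dof

# Rows: biofilm category (strong, moderate, weak); columns: gene present/absent
stat, dof = chi_square([[25, 6], [20, 21], [2, 3]])
print(f"chi2 = {stat:.2f}, dof = {dof}")
# Critical value for dof = 2 at alpha = 0.05 is 5.991
print("significant at p <= 0.05" if stat > 5.991 else "not significant")
```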
A total of 242 RTESF (green salad) samples were collected from 242 different food suppliers, placed in separate sterile plastic bags, transferred into an ice box directly after purchase, and moved to the microbiology laboratory for analysis within 24 h. Twenty-five grams of each green salad sample were mixed with 225 mL buffered peptone water and homogenized for 2 min in a Stomacher 400 Circulator laboratory blender (Seward Ltd., Worthing, UK).

For the viable count, the experiment was conducted as previously reported. One mL of each sample homogenate was added to 9 mL sterile distilled water, and seven serial dilutions were made for each sample. Under aseptic technique, 0.1 mL of the diluted sample was pipetted onto a sterile nutrient agar plate and distributed gently using an L-spreader; the plate was left to dry and then incubated at 37 °C for 24 h. Finally, the total viable colonies were counted with a darkfield colony counter (Leica Quebec, model 3325) and expressed in CFU/mL.

Isolation of K. pneumoniae from street food samples
A 100 µL aliquot of the ten-fold dilution of each sample homogenate was streaked on the following media: MacConkey agar and eosin methylene blue (EMB) agar for isolation of Enterobacteriaceae (lactose fermenters and non-fermenters), mannitol salt agar (MSA) for isolation of Staphylococcus species, and phenol red egg yolk polymyxin agar (PREP) for isolation of Bacillus spp. (all media were purchased from Oxoid, UK). The experiment was carried out as described elsewhere. K. pneumoniae isolates were identified based on Gram staining, colony characteristics, and a standard array of conventional biochemical tests: indole, methyl red, Voges-Proskauer, citrate utilization, triple sugar iron agar (TSI), oxidase, catalase, capsule, motility, and urease tests. Antibiotic susceptibility testing was done and interpreted using the standard Kirby-Bauer disk diffusion technique according to the Clinical and Laboratory Standards Institute (CLSI).
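The viable-count arithmetic implied above (0.1 mL of a serial ten-fold dilution spread-plated, colonies counted, result expressed as CFU/mL) reduces to a one-line calculation. The colony count and dilution below are hypothetical.

```python
# Minimal sketch of the viable-count arithmetic described above:
# CFU/mL = colony count / (volume plated in mL x dilution factor).
# Numbers here are hypothetical, not the study's data.

def cfu_per_ml(colonies: int, dilution_factor: float, plated_ml: float = 0.1) -> float:
    return colonies / (plated_ml * dilution_factor)

# e.g. 150 colonies on a plate spread with 0.1 mL of the 10^-5 dilution
print(f"{cfu_per_ml(150, 1e-5):.1e} CFU/mL")
```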
Enumeration of the total bacterial numbers by viable count
The viable count experiment revealed that the bacterial numbers in the collected samples ranged from 2 × 10² to 3 × 10⁹ CFU/mL.

Isolation and identification of different bacterial species from samples
Ninety of the two hundred and forty-two food samples (90/242, 37.2%) yielded Gram-negative lactose-fermenting bacteria, and 36/242 (14.9%) yielded Gram-negative non-lactose fermenters. Among the ninety Gram-negative lactose fermenters, the most frequently detected pathogen was K. pneumoniae (77/90, 85.5%), followed by E. coli (13/90, 14.5%).
For Gram-positive bacteria, Staphylococcus spp. were present in 19/242 (7.8%) of food samples, and other Gram-positive bacteria (Bacillus spp.) were present in 77/242 (31.8%) of the samples. Additionally, 20/242 (8.3%) of the samples yielded mixed isolates (Gram-positive bacteria and non-lactose-fermenting Gram-negative bacteria). The distribution of bacterial species among the street food samples is shown in Fig. .

Antimicrobial susceptibility testing
All K. pneumoniae isolates (77/77, 100%) were resistant to cefuroxime and cephradine, and 76/77 (98.7%) were resistant to amoxicillin/clavulanic acid. The resistance percentages were 53/77 (68.8%), 47/77 (61%), 44/77 (57.1%), 43/77 (55.8%), 40/77 (51.9%), and 38/77 (49.3%) for ampicillin/sulbactam, cefotaxime, ceftriaxone, cefoperazone, cefoxitin, and meropenem, respectively. About 24/77 (31.2%) of the isolates were resistant to imipenem, while 34/77 (44.2%) and 28/77 (36.4%) were susceptible to tobramycin and amikacin, respectively. Co-trimoxazole showed the highest activity, as 67/77 (87%) of the tested K. pneumoniae isolates were co-trimoxazole-susceptible. Susceptibility to chloramphenicol, norfloxacin, tetracycline, and piperacillin-tazobactam was 63/77 (81.8%), 62/77 (80.5%), 57/77 (74%), and 39/77 (50.6%), respectively. The results of the antibiotic susceptibility tests are shown in Fig. . About 21/77 (27.3%) of the isolates were resistant to three or more different classes of antibiotics (i.e., multidrug-resistant, MDR).

Phenotypic detection of ESBLs
Sixty of the isolates (60/77, 77.9%) were positive for this test.

Screening the ability of biofilm formation
Forty-one of the 77 isolates (53.2%) showed moderate biofilm formation, while 31 isolates (40.2%) showed strong biofilm formation. Only five isolates (6.5%) exhibited weak biofilm formation, as shown in Table .

Phenotypic detection of carbapenemases by THT
Carbapenemases were detected phenotypically using THT, which showed that seven (9%) of the K. pneumoniae isolates were carbapenemase producers, as indicated by a positive THT.

Molecular detection of ESBL genes
At least one of the tested ESBL genes was present among the K. pneumoniae isolates. Interestingly, bla SHV was the most prevalent gene, identified in 71.4% of the isolates (55/77), while bla TEM and bla CTX-M-2 were equally identified in 55.8% of the isolates (43/77). Similarly, bla CTX-M-8/25, bla CTX-M-9, and bla CTX-M-1 were found in 38.9% (30/77), 33.76% (26/77), and 29.8% (23/77) of the isolates, respectively. Finally, bla OXA-1 was identified in 7.79% of the isolates (6/77), as shown in Fig. , while Fig. shows the multiplex PCR for detection of some of the studied ESBL genes. The first lane represents the DNA marker (100 bp DNA ladder), NTC refers to the negative control, and the PCR products were separated on a 1.2% agarose gel. The molecular size marker is shown as lane M. Product sizes were bla SHV: 713 bp and bla TEM: 800 bp.

Molecular detection of virulence genes
Table illustrates the frequency of the nine virulence genes detected in the K. pneumoniae isolates. The most prevalent genes were mrkD (92.2%) and K2 (63.3%), followed by kfu and ybtS (51.9%). Other virulence genes, including entB, allS, and rmpA, were detected in 49.3%, 36.3%, and 25.9% of the isolates, respectively. The least detected genes were iutA and magA, identified in 22.1% and 9.1% of the isolates, respectively. Additionally, the presence of the mrkD (340 bp), K2 (531 bp), allS (764 bp), and iutA (920 bp) genes was significantly associated with strong biofilm producers in comparison to moderate and weak producers (p-value < 0.05, Table ). In the current study, thirty different virulence profiles were detected among the K. pneumoniae isolates, as shown in Table . The most prevalent virulence profiles were mrkD + K2 (59.7%), followed by mrkD + entB (46.7%).
Moreover, the data revealed two profiles in which K. pneumoniae isolates harbored seven virulence genes simultaneously: ybtS + mrkD + entB + rmpA + K2 + kfu + allS (3.9%) and mrkD + entB + rmpA + K2 + kfu + allS + iutA (2.5%). The full detailed profiles demonstrating the coexistence of virulence-associated genes among the K. pneumoniae isolates are listed in Table .
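Tallying virulence profiles like those above amounts to counting, per isolate, which combination of the nine genes was detected. A minimal sketch with hypothetical presence/absence data:

```python
# Sketch of tallying virulence-gene co-occurrence profiles per isolate,
# as in the profiling above (presence/absence data are hypothetical).
from collections import Counter

GENES = ["ybtS", "mrkD", "entB", "rmpA", "K2", "kfu", "allS", "iutA", "magA"]

# Each row: the set of genes detected in one isolate (hypothetical)
isolates = [
    {"mrkD", "K2"},
    {"mrkD", "K2", "entB"},
    {"mrkD", "entB"},
    {"mrkD", "K2"},
]

# Normalize each isolate's gene set to a tuple in a fixed gene order, then count
profiles = Counter(tuple(g for g in GENES if g in hit) for hit in isolates)
for profile, n in profiles.most_common():
    print("+".join(profile), f"{100 * n / len(isolates):.1f}%")
```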
Discussion
There are few microbiological investigations of ready-to-eat (RTE) salads with dressings/sauces; previous studies examined only raw vegetables or salad blends. Microbiological investigations have also been applied to RTE products in general, since there are many sources of microbial contamination in RTE foods; among these, the ingredients as well as processing and handling can lead to cross-contamination. The current study showed substantial microbial contamination among the 242 RTE fresh salads, with total viable bacterial counts in the collected samples ranging from 2 × 10² to 3 × 10⁹ CFU/mL. Our results are in agreement with studies from Bangladesh and Poland. There are several ways that RTESF can become contaminated.
In the current study, it was observed that these items are not heated and are only partially covered before being served. Additionally, street food vendors use their hands to serve food; the same hands take or give unclean, contaminated coins and tickets, an observation also reported in Ghana. Moreover, the water often used by street food vendors is extremely unclean, and salads are frequently contaminated with pathogenic microorganisms due to mishandling of raw vegetables, either during salad preparation or from the environment, as soils typically harbor abundant microorganisms. These practices may contribute to the contamination of street foods. The lack of hygiene in the commercialization of street foods leads to microbial foodborne disease that can affect one or more people at a time. It has recently been reported that K. pneumoniae is a main cause of foodborne outbreaks in different countries. The present study found a high prevalence of K. pneumoniae (31.8%) in RTE fresh salad samples, which was higher than previously reported in Egypt. This finding indicates that the prevalence of K. pneumoniae in RTESF is increasing substantially and that food contamination with K. pneumoniae is common in Egypt. The data obtained were in line with many previous studies, including those from India (27.12%), the Dominican Republic, India, Thailand, and Vietnam (43.3%), Malaysia (32%), Nigeria (20.24%), and Kenya (29%). In contrast, our findings differed from those reported from China (9.9%) and Spain (5.6%). With regard to other pathogens, E. coli was detected in low abundance (5.4%) in this study compared to an Indian study (22.88%), and the prevalence of Staphylococcus species was very low compared to a study from Thailand. Furthermore, the prevalence of Gram-negative non-lactose fermenters was lower than that reported by a study from Ethiopia.

The current study showed that almost all K. pneumoniae isolates were resistant to cefuroxime and amoxicillin/clavulanic acid and showed high resistance to third-generation cephalosporins, including cefotaxime (61%), cefoperazone (57.1%), and ceftriaxone (55.8%). Our data revealed that the most effective antibiotic was co-trimoxazole (87%), followed by chloramphenicol (81.8%) and norfloxacin (80.5%), which is in concordance with a study by Zhang et al. Here, we report that resistance to imipenem and meropenem was 31.2% and 49.4%, respectively, which could be attributed, at least partly, to production of carbapenemases, in line with Abdel-Rhman et al. The resistance toward imipenem (31.2%) and meropenem (49.4%) should be taken seriously, as these antibiotics are classified as lifesaving drugs used in treating serious infections. The high rates of antimicrobial resistance detected in this study could be attributed to the lack of strict policies governing the use of antibiotics in Egypt. Various mechanisms are likely involved in such resistance, including AmpC or ESBL production with porin loss, carbapenemase production, and/or metallo-β-lactamase production. This study found that 77.9% of the K. pneumoniae isolates were ESBL producers, in line with previous studies from Italy, Iran, and South Korea, with prevalence rates of 83.3%, 71.4%, and 84.2%, respectively. The percentages of ESBL-producing K. pneumoniae vary among countries, with high percentages reported in Arab countries. Moreover, studies from Egypt and other countries on the prevalence of ESBL-producing K. pneumoniae from RTESF are scarce. Currently, most data focus on E. coli and other major foodborne pathogens from various sources, thereby underestimating K. pneumoniae as a potentially prevalent organism in RTESF; our findings therefore highlight the importance of investigating ESBL-producing K. pneumoniae in RTESF.
All phenotypic methods used in this study to detect ESBL and carbapenemase production were unable to differentiate between the types or families of each class, although all β-lactamase classes now have immediate clinical impact. All ESBL genes tested in the current work (bla CTX-M, bla SHV, and bla TEM) belong to class A, which are considered the most clinically significant ESBL variants. β-Lactamases are the primary cause of β-lactam resistance among Enterobacteriaceae. Here, the most predominant gene was bla SHV (71.4%), in line with previous reports from Egypt and Iran. In contrast, our results differ slightly from the study of Iseppi et al., in which the resistance genes mainly belonged to the CTX-M family. Our results also differ from the study of Maina et al., in which TEM was the most prevalent gene (55%). In the current study, TEM and CTX-M-2 were equally present among the isolates, similar to a study from Iran. Furthermore, CTX-M-8/25, CTX-M-9, and CTX-M-1 were detected in 38.9%, 33.76%, and 29.8% of the isolates, respectively, while OXA-1 was identified in 7.79% of the isolates. These results are contrary to the study of Maina et al., in which OXA-1 was present in 39% of the isolates. Furthermore, an Indian study reported a high prevalence of TEM (40.68%), followed by CTX (32.20%) and SHV (10.17%). Biofilm formation by K. pneumoniae is crucial in facilitating evasion of host defense mechanisms, communication between bacterial cells, and protection against antibiotic action. In this study, 93.5% of the isolates were detected phenotypically as biofilm producers (53.2% moderate and 40.3% strong), in line with a previous study from Egypt, and mrkD was genotypically detected in 92.2% of the isolates. Here, we sought to investigate nine virulence genes of K. pneumoniae, namely ybtS, mrkD, entB, rmpA, K2, kfu, allS, iutA, and magA, by PCR.
This showed that K1 was detected in 9.1% of the isolates, similar to an Iranian study (11.2%) and lower than an Egyptian study (28.5%). Likewise, K2 was detected in 63.6% of the isolates, which is higher than in previous studies. Siderophore genes were then investigated: entB was detected in 49.4% of the isolates, in agreement with an Egyptian study (68%). However, entB is only associated with virulence when it occurs in association with iutA or kfu. In the current study, 9 (11.6%) of the K. pneumoniae isolates carried these three genes together. The combination of entB with the iutA gene was found in only 11 (14.3%) of the isolates, while entB and kfu co-occurred in 24 (31.2%) of the isolates. These results are lower than those of Naga et al., where the genes encoding entB in combination with iutA and kfu were found in 66% and 68% of isolates, respectively. Likewise, iutA was detected in 22.1% of the isolates, in line with a previous study from Egypt (34%). Additionally, there was a significant correlation between biofilm production and the iutA gene (p-value = 0.003). Regarding ybtS, encoding yersiniabactin, it was detected in 51.9% of the isolates, compared to a Chinese study (95.9%) and an Iranian study (39%). For the iron acquisition system, kfu was detected in 51.9% of the isolates, which is lower than in an Egyptian study (100%). The most prevalent virulence gene was mrkD (92.2%), in line with previous studies. As for rmpA, a putative virulence factor associated with highly virulent K. pneumoniae, a lower prevalence (26%) was detected compared with others (52%). Here, the coexistence of K1 and rmpA was detected in only 2.5% of the isolates, while 15.6% of the isolates co-harbored rmpA and K2. The data obtained revealed that allS, an activator of the allantoin regulon, was detected in 36.4% of the isolates, which is higher than reported elsewhere.
It was observed that there was a significant correlation between mrkD and resistance to cefoxitin (p-value < 0.05). Likewise, entB showed a significant correlation with resistance to imipenem and ceftriaxone. For the K2 gene, there was a significant correlation with resistance to cefoperazone, while kfu showed a significant correlation with resistance to imipenem and meropenem. Finally, allS showed a significant correlation with resistance to cefoxitin and imipenem, and iutA showed a significant correlation with resistance to meropenem (p-value < 0.05), as shown in supplementary table- . Therefore, the presence of virulence factors and antibiotic resistance were, at least partly, directly correlated, an observation in accordance with previously published work from Egypt. In conclusion, to our knowledge, the present study is the first report in Egypt investigating RTE fresh green salad. We report highly resistant as well as hypervirulent profiles of K. pneumoniae isolates recovered from RTESF, which thus represents a reservoir of resistant K. pneumoniae isolates. This is a major public health concern; therefore, strict control of the emergence and transmission of these isolates is needed. This can be attained by developing more prevention strategies for the processing and handling of this type of food. Below is the link to the electronic supplementary material. Supplementary Material 1
Use of qPCR to monitor 2,4-dinitroanisole degrading bacteria in water and soil slurry cultures | 29a3aef4-b0ad-476b-a44a-f106f5da5050 | 11631463 | Microbiology[mh] | The growing use of insensitive munitions compounds (IMCs), such as 2,4-dinitroanisole (DNAN) and 3-nitro-1,2,4-triazol-5-one (NTO), leads to environmental contamination at firing ranges and problems with disposal of waste streams at manufacturing facilities. IMCs were developed >40 years ago (Powell, ), but have been widely used only in the past decade. The concentrations of DNAN in manufacturers’ waste streams can range from ∼110 to 150 mg L −1 (Felt et al., ; Shen et al., ; Schroer et al., ; Hadnagy et al., ; Fawcett-Hirst et al., ). Environmental contamination of water and soils with these munitions compounds is problematic because of potential toxicity and mutagenicity to humans and other organisms (Liang et al., ; Madeira et al., ; Purohit and Basu, ; Menezes et al., ). A variety of biotic and abiotic DNAN transformations have been reported (Weidhaas et al., ; Niedźwiecka et al., ; Menezes et al., ; Wang et al., ), but ideal bioremediation or natural attenuation strategies should include DNAN biodegradation (mineralization) under environmentally-relevant conditions. Biodegradation of DNAN under aerobic conditions has been established in Nocardioides sp. strain JS1661 which was isolated from activated sludge at Holston Army Ammunition Plant (Fida et al., ). DNAN is biodegraded by strain JS1661 in minimal media, soil, and bioreactors (Karthikeyan & Spain, ; Menezes et al. ), and JS1661 grows with DNAN as the sole carbon source as previously described (Fida et al., ). Initial attack on DNAN by a demethylase enzyme produces 2,4-dinitrophenol which is then degraded via a series of reactions resulting in release of nitrite and accumulation of biomass (Fida et al., ). 
In mixed microbial communities, such as those found in soil and bioreactors, the nitrite is subsequently oxidized to nitrate by nitrifying bacteria (Karthikeyan & Spain, ; Menezes et al., ). Prediction and process monitoring during natural attenuation, bioremediation, and biotreatment require effective strategies for detecting the biodegrading microbes. The population density and growth must be determined to be consistent with the biodegradation rates (ASTM, ). Enumerating specific strains in microbial communities using 16S rRNA genes is limited by the variable number of 16S rRNA operons per cell (Zemb et al., ), highly similar 16S rRNA sequences even within hypervariable regions (Chakravorty et al., ), the need for a third oligonucleotide, e.g., a TaqMan probe (Chakravorty et al., ; Ritalahti et al., ), or a nested 16S PCR assay design, which may limit the quantification range (Ritalahti & Löffler, ; Löffler et al., ; Marušincová et al., ). Possible molecular methods include metagenomics, 16S rRNA gene amplicon sequencing, or other methods targeting known functional genes of the biodegradation processes. In a metagenomic analysis of municipal anaerobic digester sludge, growth of dominant community members with known functional capability was observed simultaneously with reduction of the NTO contaminant (Madeira et al., ). Similarly, myriad other studies evaluate the full community structures of biodegradative communities, sometimes supplementing findings with genome sequencing of isolates or quantitative PCR (qPCR) for specific functional genes (Kharey et al., ; Chen et al., ; Richards et al., ; Dang et al., ). The drawback of 16S rRNA gene sequencing is that catabolic pathways are not uniformly present in all members of a species, as recently described for metaldehyde biodegradation potential (Castro-Gutierrez et al., ).
Currently, among the variety of molecular methods, qPCR is still considered the "gold standard" technique for tracking functional genes of biodegrading microorganisms in samples containing an environmental contaminant. Examples using multiplexing or TaqMan probe designs for functional genes include biomarkers for 1,4-dioxane metabolism (dxmB; Miao et al., ) and anaerobic toluene degradation (bssA; Pilloni et al., ), as well as multi-gene assay designs for RDX biodegradation (Wilson & Cupples, ; Collier et al., ). qPCR offers advantages in sensitivity, specificity, and detection range, at lower cost than other molecular biological methods (ASTM, ). Specific qPCR assays for functional genes involved in a range of biodegradation pathways are commercially available (ASTM, ). Unfortunately, some require multiplexed qPCR for multiple targets (Inoue et al., ; Yang et al., ), degenerate bases in qPCR primers to target diverse functional gene sequences (Jin & Mattes, ), or the use of TaqMan probes in multiplexed qPCR assays, i.e., combining several single-target primer pairs (Collier et al., ; Dang & Cupples, ). Probe-based or multiplex qPCR assays are described for a wide variety of biodegradation functional genes, including for soil pesticide metaldehyde degradation (Castro-Gutierrez et al., ), for 1,4-dioxane-degrading bacteria (Inoue et al., ), and for soil 4-chlorobenzoate biodegradation potential (Rodrigues et al., ). Such multiplex or probe-based assay types are relatively expensive compared to the SYBR Green-based assay described in this report. The most economical assay for detecting specific functional genes would involve a single set of non-degenerate qPCR primers with high specificity. Although complex strategies are required for distinguishing among closely-related strains, the approach can be more straightforward when there are unique or deeply-branching functional genes.
The goal of this work was to develop a molecular method for enumeration of bacteria with the capacity for aerobic DNAN biodegradation. The candidate biomarker employed here, the dnhA–dnhB gene pair, is specific to the only known DNAN degrader, Nocardioides sp. strain JS1661 (Fida et al., ).

Growth in Liquid Media

Liquid cultures of JS1661 were grown in ½-strength minimal salts (½ MSB) media (Cohen-Bazire et al., ), pH 6.5, and supplemented with DNAN to final concentrations of approximately 400 µM as described previously (Fida et al., ; Karthikeyan & Spain, ). Cultures were grown at 30°C with shaking at 100 rpm in duplicate flasks. At appropriate intervals, samples were removed for DNA extraction, most probable number (MPN) analyses, nitrite, optical density (OD600), and high performance liquid chromatography (HPLC) analyses.

Growth in Soil Slurry

The soil slurry cultures of JS1661 contained low-organic, dried sandy loam soil that was sieved (30–40 mesh) as described previously (Karthikeyan & Spain, ). Soil (10% w/v) was suspended in ½-strength MSB containing DNAN (∼350 µM), then inoculated, incubated, and analyzed as described above. Each soil slurry was 150 mL, composed of 15 grams of soil, 10 mL of culture inoculum, and 125 mL of media. Suspensions were stirred on a magnetic stir plate during sampling with wide-bore pipette tips.

MPN

From duplicate flasks, samples for MPN estimations were sonicated for 5 s and then subjected to serial 10-fold dilutions in ½ MSB, pH 6.5, supplemented with DNAN (400 µM). Sixteen 200 µL samples from each dilution were added to 96-well round-bottom microplates, which were then sealed and incubated at 30°C. Each microplate thus constituted 2 sets of 8-tube MPNs. After 10 days, growth was scored by accumulation of visible pellets in the bottom of the wells.
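The well-scoring scheme above feeds a standard most-probable-number calculation. As an illustration of the underlying statistics (the EPA calculator used in this study additionally reports Cornish & Fisher confidence limits, which are omitted here), a maximum-likelihood point estimate can be sketched as follows; the well volumes and positive counts are hypothetical:

```python
import math

def mpn_mle(volumes_ml, n_wells, n_positive, lo=1e-6, hi=1e9):
    """Maximum-likelihood MPN (cells per mL) for a serial-dilution assay.

    Assumes Poisson-distributed cells, so P(well negative) = exp(-lam*v).
    The score (d log-likelihood / d lam) is strictly decreasing in lam,
    so its root is found by bisection on a log scale.
    """
    def score(lam):
        s = 0.0
        for v, n, p in zip(volumes_ml, n_wells, n_positive):
            s += p * v * math.exp(-lam * v) / (1.0 - math.exp(-lam * v))
            s -= (n - p) * v
        return s
    for _ in range(200):
        mid = math.sqrt(lo * hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return math.sqrt(lo * hi)

# Hypothetical series: three 10-fold dilutions, 8 wells each, 0.2 mL per well.
volumes = [0.02, 0.002, 0.0002]  # mL of original sample delivered per well
mpn = mpn_mle(volumes, [8, 8, 8], [8, 5, 1])  # on the order of 1e2-1e3 cells/mL
```

The estimate rises monotonically with the number of positive wells, which is a useful sanity check on any implementation.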
The EPA MPN calculator, using Cornish & Fisher limits with a 95% confidence interval (EPA, ), was used to estimate viable cells per mL in the liquid culture and soil slurry experiments.

DNA Extraction and Analysis

For DNA extraction from liquid cultures and soil slurries, biomass was harvested by filtration and centrifugation, respectively. Samples from liquid cultures (10 mL) were filtered (Durapore® GV 0.22 µm, 25 mm diameter, EMD Millipore, Burlington, MA, USA). Filters with biomass were each transferred to bead-beating tubes with 0.1 g each of 0.5 mm and 0.1 mm zirconia/silica beads plus 500 µL of lysis buffer (20 mM Tris-HCl, pH 8, 2 mM EDTA, 200 mM NaCl, and 0.2% Triton X-100). Bacterial biomass from slurries was harvested by centrifugation (10 min, 20 800 g, 5°C), supernatants were discarded, and pellets were mixed with beads and lysis buffer. Harvested biomass from both procedures was stored at −20°C until DNA was extracted. A modified version of the Qiagen Gram-positive DNA extraction protocol (QIAamp® DNA Mini, #51306, Qiagen, Hilden, Germany) was used for all samples and for the qPCR positive control culture. Modifications included enzymatic digestion with lysozyme (in addition to Proteinase K), bead beating, and RNA removal with RNase A. Lysozyme was added to thawed samples (final concentration 2.8 mg mL−1) and incubated at 37°C for 30 min. Physical cell disruption was done for 2 × 1.0 min at a setting of 4.2 (Bead Ruptor Elite model, OMNI International, Kennesaw, GA, USA). RNA was removed with RNase A (1.0 mg mL−1) at 24°C for 2 min, and then the rest of the manufacturer's protocol steps were followed. Except for standard DNA isolated from pure Nocardioides sp. strain JS1661, DNA preparations were used undiluted in qPCR.

Quantitative PCR

Initial qPCR primer testing used the conditions described below, followed by end-point analysis of amplicon purity and size via gel electrophoresis (2% agarose in 1X TAE, ).
Additional amplicon validation was done on end-point product that was subjected to electrophoresis, excised from the gel, and purified (QIAquick Gel Extraction, Qiagen, Hilden, Germany). The purified PCR product was subjected to Sanger sequencing in both directions (GeneWiz, Azenta Life Sciences, South Plainfield, NJ, USA). Primer pair DNAN-F6 and DNAN-R1 (Table ) was used in qPCR runs in an Applied Biosystems™ QuantStudio™ 3 instrument (Thermo Fisher Scientific, Waltham, MA, USA). Initial 95°C incubation was for 1 min, followed by 40 cycles of 95°C (15 s), 60°C (60 s), and 72°C (20 s), with detection normalized to ROX during extension, and completed with a melt curve step. Melt curves were evaluated in each run, but amplicon size was also validated by electrophoresis (see above). Reactions were composed of 0.08 µM each forward and reverse primers, 1X Applied Biosystems™ PowerTrack™ SYBR Green master mix (Thermo Fisher Scientific, Waltham, MA, USA), 2 µL of purified DNA sample, and water to achieve a 20 µL final volume. DNA for standard curves was isolated from Nocardioides sp. strain JS1661 pure cultures and used for calculations of absolute gene quantities in experimental samples. Standard DNA was assessed by UV spectrophotometry (NanoDrop™ ND-1000 V3.8.1, Thermo Fisher Scientific, Waltham, MA, USA), and 10-fold serial dilutions in 10 mM Tris-1 mM EDTA (T10E1) were used in qPCR. All qPCR measurements were done in triplicate.

HPLC

HPLC was used to determine DNAN and 2,4-DNP concentrations. Liquid or slurry samples were mixed with equal volumes of acetonitrile, vortexed, clarified by centrifugation, and analyzed by HPLC. Chromatography was done with an Agilent 1260 HPLC system using a Chromolith high-resolution RP18 150–4.6 mm column. The mobile phase was 50% acetonitrile with 0.1% trifluoroacetic acid and 50% water with 0.1% trifluoroacetic acid.
Absorbance of DNAN and 2,4-DNP was monitored at 300 nm with an Agilent Diode Array Detector (Model G4212B). DNAN (CAS # 119-27-7, purity = 98%) was from Alfa Aesar (Ward Hill, MA, USA), 2,4-DNP (CAS # 51-28-5, purity ≥ 98%) was from Sigma–Aldrich (Millipore-Sigma, St. Louis, MO, USA), and all other chemicals were reagent grade or better.
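Absolute quantification against the JS1661 standard DNA rests on two facts stated in this report: the target is a single-copy gene, and the genome is ∼5.8 Mbp. A sketch of the mass-to-copies conversion, together with the usual efficiency calculation from a standard-curve slope (the 650 g mol−1 bp−1 figure is the common approximation for double-stranded DNA, not a value from this study):

```python
AVOGADRO = 6.022e23
BP_GRAMS_PER_MOL = 650.0  # average mass of one double-stranded base pair

def copies_per_ng(genome_bp, gene_copies_per_genome=1):
    """Gene copies contained in 1 ng of genomic DNA of a given size."""
    grams_per_genome = genome_bp * BP_GRAMS_PER_MOL / AVOGADRO
    return 1e-9 / grams_per_genome * gene_copies_per_genome

def efficiency_from_slope(slope):
    """Amplification efficiency from the slope of Ct vs log10(copies).

    Perfect doubling per cycle gives slope = -3.32 and efficiency = 1.0.
    """
    return 10.0 ** (-1.0 / slope) - 1.0

# Single dnhA-B copy in the ~5.8 Mbp JS1661 genome:
copies = copies_per_ng(5.8e6)        # ~1.6e5 copies per ng of genomic DNA
eff = efficiency_from_slope(-3.32)   # ~1.0, i.e., ~100% efficiency
```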
Nitrite and OD600 Assays

Nitrite was measured as described previously (Daniels et al., ), using the Griess method and measurement of final absorbance at 543 nm. All absorbance measurements were done in 96-well plates in a microplate reader (Synergy HT, Biotek, Santa Clara, CA, USA).

DNA Sequence Analyses

Genome sequences were annotated and compared in RAST (Aziz et al., ). Additional manual curation to evaluate percent identity of amino acid sequences among homologs was performed with BLASTp or BLASTn analyses (Altschul et al., ). For primer design, nearest-neighbor melting temperatures (NN Tm) were evaluated as previously described (Kibbe, ). Sanger sequencing reads were edited in SnapGene Viewer, and the consensus sequence was formed in MEGA-X (Kumar et al., ). The resulting 174-nucleotide sequence was aligned with the previously-published dnhA-B gene sequences to validate the expected amplicon.

JS1661 Draft Genome

The draft genome of Nocardioides sp. strain JS1661 was obtained by sequencing as previously described (Fida et al., ). Briefly, Illumina sequencing (HiSeq 2000) was followed by de novo read assembly (Tritt et al., ) and annotation with RAST (Aziz et al., ). A single copy of the dnhA-B genes is in the ∼5.8 Mbp draft genome. The 4993 bp annotated contig containing the dnhA-B genes as shown in is publicly available (accession #KM213001.1), as previously described (Fida et al., ).

dnhA-B qPCR Assay Design, Sensitivity, and Specificity

In Nocardioides sp. strain JS1661, the DNAN demethylase is encoded by two genes whose open reading frames overlap by 3 nucleotides. Base positions #984–987 (GTGA) of the 987-bp dnhA gene overlap with the first four bases of the 957-bp open reading frame for dnhB, starting with the alternative GTG codon ( , panel a). To date (August 2024), there are no functionally characterized protein sequences that are closely related to the DNHA and DNHB proteins. Most homologs have amino acid identity <60% and are open reading frames with unknown function from metagenome-assembled genome (MAG) segments. Two characterized proteins include the Zn-bound alkylsulfatases from Escherichia coli (Liang et al., ) and Pseudomonas sp. (Knaus et al., ), but percent identity is low, and only the central regions of each align with portions of the DNHA or DNHB sequences. The one cultured strain that contains homologs to dnhA-B in tandem is Nocardia testacea NBRC 100365 ( and ), but the Nocardia proteins have not been functionally characterized.
The B homolog is annotated as a hypothetical protein, and the A homolog is a putative member of the metallo-beta-lactamase (MBL) fold metallo-hydrolase superfamily. Most of the JS1661 DNHA homologs are annotated as MBL fold hydrolases, containing the established conserved domain database sequences (Wang et al., ; NCBI CDD, MBL-fold, ). The DNAN hydrolase subunits DNHA and DNHB seem to constitute a deeply-branching subclass within the superfamily. The nucleotide sequence at the junction of JS1661 dnhA-B is unique. The region targeted by our qPCR assay has little to no nucleotide identity with the DNA sequences encoding the A and B homologs in N. testacea or any of the more distantly related homologs. In the regions where we designed qPCR primers, the homolog sequences' identities are so low (<50% of 20 bases) that primer annealing would not occur under standard qPCR conditions. The 174-bp amplicon spans the end of dnhA and the start of dnhB (Table and , panel b). When primers were initially tested in PCR with DNA from a pure culture of strain JS1661, the resulting amplicon was of the anticipated length ( , panel a). Sequencing of the amplicon verified that it matched with 100% nucleotide identity to the dnhA-B junction (positions #1696–1869 of accession #KM213001.1). The assay gave reproducible detection down to ∼100 gene copies ( , panel b). Cycle threshold (CT) values ranged from ∼18 to 36, well below the 40 cycles used to assess amplicon size and purity as described above. An attempt to detect ∼10.6 copies per reaction was also made (not shown), but a measurable CT value (39.24) was observed in only one of three wells. In 100% of the standard curve qPCR experiments performed, there were no "undetectable" wells in qPCR assays containing 106 gene copies.
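The specificity argument above (<50% identity over the 20-base primer sites) reduces to a simple gap-free identity computation. A sketch with hypothetical 20-mers (these are illustrative sequences, not the actual DNAN-F6/DNAN-R1 primers, which are given in the table):

```python
def primer_site_identity(primer, site):
    """Fraction of matching bases in a gap-free, equal-length comparison
    between a primer and a candidate annealing site."""
    assert len(primer) == len(site)
    matches = sum(a == b for a, b in zip(primer, site))
    return matches / len(primer)

# Hypothetical 20-mers for illustration only:
primer = "ACGTGGCTAAGCTTGACCGT"
homolog_site = "TTAACCGGTTAACCGGTTAA"
identity = primer_site_identity(primer, homolog_site)
# identity < 0.5: annealing is not expected under stringent qPCR conditions
```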
The calculated lower detection limit of the assay from replicate standard curves, including 24 wells containing the lowest concentration of the DNA standard, was 38.19 copies per reaction, with a 95% confidence interval. In all experimental samples and standard curves, the upper limit assayed was ∼1.6 × 10^6 copies ( , panel b). We also evaluated assay precision for all experimental samples and standard DNA reactions, and replicate error was low in all cases. In all experimental DNA samples from liquid and soil slurry cultures, the assay coefficient of variation (CV) ranged from 0.1% to 11%, with an average experimental CV of 6%. In triplicate wells for each of the standard DNA concentrations, the CV was even lower, from 0.1% to 1.5%. Therefore, qPCR abundances for duplicate flasks are reported as a single average curve from both flasks, without including the well-to-well assay error. An inhibition assessment was performed on DNA extracted from all time points of a soil slurry culture; the Ct values for undiluted DNA yielded calculated gene copies approximately eight-fold higher than those obtained by qPCR on the same DNA diluted 10-fold. The qPCR enumeration method was validated with DNA from both liquid and soil slurry cultures supplemented with DNAN and inoculated with strain JS1661. Consistent with previous results (Fida et al., ; Karthikeyan & Spain, ), DNAN disappearance, accumulation of nitrite, and exponential growth of strain JS1661 in liquid cultures and soil slurries were rapid and reproducible. The DNAN degradation pathway (Fida et al., ) indicating the role of the dnhA-B gene products is provided in .

Growth in Liquid Media

When bacteria grew on DNAN in minimal media, OD600, MPN, and qPCR tracked growth well and were consistent with disappearance of DNAN and the transient intermediate, 2,4-DNP, along with accumulation of nitrite and biomass (Figs. and ).
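The 10-fold-dilution inhibition assessment described above has a simple quantitative expectation: with ∼100% efficiency and no inhibition, a 10-fold dilution should raise Ct by log2(10) ≈ 3.32 cycles, and a smaller shift points to inhibition of the undiluted extract. A sketch with hypothetical Ct values:

```python
def apparent_fold_dilution(ct_undiluted, ct_diluted, efficiency=1.0):
    """Fold-dilution implied by a Ct shift, assuming a per-cycle
    amplification factor of (1 + efficiency)."""
    return (1.0 + efficiency) ** (ct_diluted - ct_undiluted)

# Hypothetical Ct values:
no_inhibition = apparent_fold_dilution(25.00, 28.32)  # ~10, as expected for 1:10
inhibited = apparent_fold_dilution(25.00, 26.32)      # ~2.5; the undiluted
# reaction under-reports its template, consistent with inhibition
```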
The qPCR assay indicated that strain JS1661 abundance increased ∼50-fold, from ∼1 × 10^6 cells mL−1 to ∼5 × 10^7 cells mL−1, in 18 hr (Fig. ). It should be noted that the absorbance growth curve (Fig. ) is an arithmetic plot, whereas the MPN and qPCR data are semilog plots. The population increase during the time course was consistent as measured by qPCR, absorbance, and MPN, but the cell abundance estimates from MPN were low compared to values obtained from qPCR. This is not unexpected, given the tendency of Nocardioides to grow in clumps or branching chains (Yoon & Park, ; Ma et al., ). The qPCR assay has advantages over traditional methods for estimating population growth. For example, abundances based on simple absorbance do not track well with other methods (e.g., CFU from dilution plating and flow cytometry), and abundance estimates can be off by as much as 10-fold depending on the cell concentration (Beal et al., ; Mira et al., ). In addition, MPN assays require the ability to grow bacteria from fresh samples, whereas qPCR can be applied to frozen samples. Finally, MPN is more tedious and is not easily integrated into the workflows of commercial labs or in the field, which is why qPCR use is more widespread.

Growth in Soil Slurries

A similar set of biodegradation-dependent growth experiments evaluated the ability of the qPCR assay to quantify the JS1661 population during growth on DNAN in soil slurries (Figs. and ). Because biomass estimates based on OD600 are not possible in soil slurries, the qPCR method was particularly important for population estimates. Consistent with the liquid cultures above, the qPCR-based abundance estimates mirrored those of the MPN data (Fig. ). Strain JS1661 abundance increased ∼40-fold in 20 hr of growth in soil slurries, based on the qPCR data (Fig. ). Again, as in liquid culture experiments, MPN abundances in soil slurries (Fig.
) were lower than qPCR estimates, likely due to attachment of the cells to soil particles (Roser et al., ) or Nocardioides growth in clumps (Yoon & Park, ; Ma et al., ). Attachment and clumping can also explain the lower precision of MPN assays reflected in the broad confidence intervals . The data presented here, as well as in other reports (Liang et al., ; Kharey et al., ; Miao et al., ; Richards et al., ), add to the growing list of sensitive and specific qPCR assays for accurate measurement of bacterial growth during biodegradation of organic pollutants. However, there are currently no assays for bacteria that degrade insensitive munitions components. Because the genome size and number of copies of the functional gene target in the JS1661 genome are known, this qPCR method allowed for direct enumeration of the population. This is a particularly important consideration when evaluating abundance estimates with multiple methods, as was done here. Ideal qPCR assays should detect gene copies in a wide range of concentrations, which is not always possible, as recently reported for sulfate reduction pathway genes in sludge (Zambrano-Romero et al., ). In this report, the assay was tested on a wide range of copies, which is important for evaluating complex samples such as sludge, soil, and groundwater. Whereas the liquid culture tests reported here were done with pure cultures, the proof-of-concept soil slurry experiments contained naturally-occurring bacterial communities. If new DNAN degraders are isolated, their dnhA-B sequences can be examined, and if necessary, the qPCR primer sequences could be adjusted accordingly. Future studies should evaluate the method in a wide range of field conditions. qPCR Assay design, Sensitivity, and Specificity In Nocardiodes sp. strain JS1661, the DNAN demethylase is encoded by two genes whose open reading frames overlap by 3 nucleotides. 
Base positions # 984–987 (GTGA) of the 987-bp dnhA gene overlap with the first four bases of the 957-bp open reading frame for dnhB , starting with the alternative GTG codon ( , panel a). To date (August 2024), there are no functionally characterized protein sequences that are closely related to the DNHA and DNHB proteins . Most homologs have amino acid identity <60% and are open reading frames with unknown function from metagenome-assembly genome (MAG) segments. Two characterized proteins include the Zn-bound alkylsulfatases from Escherichia coli (Liang et al. ) and Pseudomonas sp. (Knaus et al., ), but percent identity is low, and only the central regions of each align with portions of the DNHA or DNHB sequences. The one cultured strain that contains homologs to dnhA-B in tandem is Nocardia testacea NBRC 100365 ( and ), but the Nocardia proteins have not been functionally characterized. The B homolog is annotated as a hypothetical protein, and the A homolog is a putative member of the metallo-beta-lactamase (MBL) fold metallo-hydrolase superfamily. Most of the JS1661 DNHA homologs are annotated as MBL fold hydrolases, containing the established conserved domain database sequences (Wang et al., ; NCBI CDD, MBL-fold, ). The DNAN hydrolase subunits DNHA and DNHB seem to constitute a deeply-branching subclass within the superfamily . The nucleotide sequence at the junction of JS1661 dnhA-B is unique. The region targeted by our qPCR assay has little to no nucleotide identity with the DNA sequences encoding the A and B homologs in N. testacea or any of the more distantly related homologs. In the regions where we designed qPCR primers, the homolog sequences’ identities are so low (<50% of 20 bases) that primer annealing would not occur under standard qPCR conditions. The 174-bp amplicon is a product including the end of dnhA and the start of dnhB (Table and , panel b). 
When primers were initially tested in PCR with DNA from a pure culture of strain JS1661, the resulting amplicon was of the anticipated length ( , panel a). Sequencing of the amplicon verified that it matched with 100% nucleotide identity to the dnhA-B junction (positions #1696–1869 of accession #KM213001.1). The assay gave reproducible detection down to ∼100 gene copies ( , panel b). Cycle threshold ( C T ) values ranged from ∼18 to 36, well below the 40 cycles used to assess amplicon size and purity described above. An attempt to detect ∼10.6 copies per reaction was also done (not shown), but a measurable C T value (39.24) was observed in only one of three wells. In 100% of the standard curve qPCR experiments performed, there were no “undetectable” wells in qPCR assays containing 106 gene copies. The calculated lower detection limit of the assay from replicate standard curves, including 24 wells containing the lowest concentration of the DNA standard, was 38.19 copies per reaction, with a confidence interval of 95%. In all experimental samples and standard curves, the upper limit assayed was ∼1.6 × 10 6 copies ( , panel b). We also evaluated assay precision for all experimental samples and standard DNA reactions, and replicate error was low in all cases. In all experimental DNA samples from liquid and soil slurry cultures, the assay coefficient of variation (CV) ranged from 0.1% to 11%, with an average experimental CV of 6%. In triplicate wells for each of the standard DNA concentrations, the CV was even lower, from 0.1% to 1.5%. Therefore, qPCR abundances for duplicate flasks are reported as a single average curve from both flasks, without including the well-to-well assay error. 
An inhibition assessment was tested on DNA extracted from all time points of a soil slurry culture ; the Ct values associated with undiluted DNA resulted in calculated gene copies of approximately ∼eight-fold higher values than those associated with qPCR on the same DNA that was diluted 10-fold. The qPCR enumeration method was validated with DNA from both liquid and soil slurry cultures supplemented with DNAN and inoculated with strain JS1661. Consistent with previous results (Fida et al., ; Karthikeyan & Spain, ), DNAN disappearance, accumulation of nitrite, and exponential growth of strain JS1661 in liquid cultures and soil slurries was rapid and reproducible. The DNAN degradation pathway (Fida et al., ) indicating the role of the dnhA-B gene products is provided in . When bacteria grew on DNAN in minimal media, OD 600 , MPN, and qPCR tracked growth well and were consistent with disappearance of DNAN and the transient intermediate, 2,4-DNP, along with accumulation of nitrite and biomass (Figs. and ). The qPCR assay indicated that strain JS1661 abundance increased ∼50-fold, from ∼1 × 10 6 cells mL −1 to ∼5 × 10 7 cells mL −1 in 18 hr (Fig. ). It should be noted that the absorbance growth curve (Fig. ) is an arithmetic plot, whereas the MPN and qPCR data are semilog plots. The population increase during the time course was consistent as measured by qPCR, absorbance, and MPN, but the cell abundance estimates from MPN were low compared to values obtained from qPCR. This is not unexpected, since there is a tendency of Nocardioides to grow in clumps or branching chains (Yoon & Park, ; Ma et al., ). The qPCR assay has advantages over traditional methods for estimating population growth. For example, abundances based on simple absorbance do not track well with other methods (e.g., CFU from dilution plating and flow cytometry), and abundance estimates can be off by as much as 10-fold depending on the cell concentration (Beal et al., ; Mira et al., ). 
In addition, MPN assays require the ability to grow bacteria from fresh samples, whereas qPCR can be applied with frozen samples. Finally, MPN is more tedious and is not easily integrated in workflows of commercial labs or in the field, which is why qPCR use is more widespread. A similar set of biodegradation-dependent growth experiments evaluated the ability of the qPCR assay to quantify the JS1661 population during growth on DNAN in soil slurries (Figs. and ). Because biomass estimates based on OD 600 are not possible in soil slurries, the qPCR method was particularly important for population estimates. Consistent with the liquid cultures above, the qPCR-based abundance estimates mirrored those of the MPN data (Fig. ). Strain JS1661 abundance increased∼40-fold in 20 hr of growth in soil slurries, based on the qPCR data (Fig. ). Again, as in liquid culture experiments, MPN abundances in soil slurries (Fig. ) were lower than qPCR estimates, likely due to attachment of the cells to soil particles (Roser et al., ) or Nocardioides growth in clumps (Yoon & Park, ; Ma et al., ). Attachment and clumping can also explain the lower precision of MPN assays reflected in the broad confidence intervals . The data presented here, as well as in other reports (Liang et al., ; Kharey et al., ; Miao et al., ; Richards et al., ), add to the growing list of sensitive and specific qPCR assays for accurate measurement of bacterial growth during biodegradation of organic pollutants. However, there are currently no assays for bacteria that degrade insensitive munitions components. Because the genome size and number of copies of the functional gene target in the JS1661 genome are known, this qPCR method allowed for direct enumeration of the population. This is a particularly important consideration when evaluating abundance estimates with multiple methods, as was done here. 
Ideal qPCR assays should detect gene copies over a wide range of concentrations, which is not always possible, as recently reported for sulfate reduction pathway genes in sludge (Zambrano-Romero et al., ). In this report, the assay was tested over a wide range of copy numbers, which is important for evaluating complex samples such as sludge, soil, and groundwater. Whereas the liquid culture tests reported here were done with pure cultures, the proof-of-concept soil slurry experiments contained naturally occurring bacterial communities. If new DNAN degraders are isolated, their dnhA-B sequences can be examined, and if necessary, the qPCR primer sequences could be adjusted accordingly. Future studies should evaluate the method under a wide range of field conditions. Many pathways for biodegradation of synthetic chemicals comprise enzymes that are closely related to those for natural compounds. Thus, they are widely distributed and often difficult to distinguish. In contrast, the genes encoding aerobic DNAN degradation offer a unique target for development of molecular tools to evaluate populations. Here, we report a clear relationship between biomarker gene abundances and DNAN-dependent growth of JS1661 under aerobic conditions. The amplicon is of ideal length for a simple qPCR approach applicable to both liquid and soil samples. The dnhA-B genes of strain JS1661 are unique, so this qPCR primer design allows for sensitive and specific detection of microbes known to be responsible for using a major IMC waste stream component as a growth substrate under aerobic conditions.
Analysis of factors affecting the clinical management of infection in culture-negative patients following percutaneous endoscopic decompression: a retrospective study (PMC11808038) Introduction Lumbar disc herniation (LDH) and lumbar spinal stenosis (LSS) are the most common degenerative diseases of the lumbar spine . Because they involve structural changes, surgical intervention is often necessary for treatment. As surgical techniques and concepts have advanced, the approach to treating lumbar degenerative diseases has evolved from posterior lumbar interbody fusion (PLIF) to percutaneous endoscopic decompression (PED) . Compared to PLIF, PED offers several advantages, including shorter operation time, less trauma, faster recovery, and lower complication rates . Previous studies have shown that PED is less invasive than PLIF and has a lower incidence of surgical site infections (SSIs), while achieving similar treatment outcomes. Some studies have indicated that operation time is a key risk factor for SSI, that the use of percutaneous endoscopic techniques is a significant protective factor against SSI, and that there is currently no evidence that different degenerative lumbar spine diseases affect the incidence of SSI . Additionally, large prospective studies have found that the likelihood of developing SSI after PED is approximately one-third of that following PLIF . Although the postoperative infection rate of PED is lower than that of PLIF, postoperative infection after PED cannot be completely avoided. Worse still, routine preoperative antibiotic use makes it more likely that bacterial cultures will be negative when SSI occurs after PED. Even when bacterial culture results are negative, changes in the patient's condition, elevated infection markers, and imaging findings indicating tissue edema at the surgical site all suggest the presence of postoperative infection .
Due to the use of preoperative prophylactic antibiotics, the likelihood of antibiotic-resistant bacteria emerging in postoperative SSI increases . Some studies have found that patients with culture-negative results have lower infection markers, while a higher body temperature (>37.8°C) may favor positive culture results . In common pyogenic spinal infections, spinal surgeons often rely on experience to administer broad-spectrum antibiotics and to determine whether the patient has an SSI, based on the effectiveness of antibiotic treatment and changes in laboratory indicators, as few other serological tests are available to help identify the infecting pathogen . If the patient's symptoms improve, infection markers decrease, body temperature normalizes, and imaging findings gradually improve after antibiotic treatment, this generally suggests that the infection diagnosis was correct . Conversely, a lack of response may indicate a transient inflammatory reaction due to aseptic inflammation. At this stage, the physician loses the ability to actively target the infection and must passively accept the treatment outcome before deciding on the next course of action. In terms of prognosis and treatment effectiveness, after appropriate antibiotic use there is no difference between patients with negative cultures and those with positive cultures . However, an antibiotic course shorter than 6-8 weeks may be a high-risk factor for infection recurrence , and the risk of bacterial resistance associated with long-term antibiotic use increases accordingly. With the increasing risk of antibiotic resistance and the interference of negative bacterial cultures with diagnosis, the infection may progress silently, leading to a sudden worsening of clinical symptoms and potentially resulting in shock or death .
Therefore, when empirically using antibiotics in postoperative SSI patients following PED, shortening the treatment duration while controlling infection progression is a favorable factor in preventing adverse outcomes. In assessing the progression of infections, inflammatory markers serve as an important reference for clinical judgment. Previous studies have shown that patients with negative cultures may have lower inflammatory markers, and approximately 50% of infections caused by gram-negative bacteria or fungi can present with normal C-reactive protein (CRP) levels. Research has also demonstrated that the impact of different types of bacterial infections on inflammatory markers varies, which provides some basis for empirical treatment decisions. Furthermore, even in the absence of overt symptoms or elevated inflammatory markers, if there is strong suspicion based on radiological findings, it is recommended to administer prolonged, adequate anti-infective therapy . Specifically, in patients with postoperative spinal infections, some studies indicate that CRP may have higher sensitivity. Procalcitonin (PCT) can be mildly elevated in localized lumbar infections and significantly raised in systemic infections. PCT can be used as a reference marker for monitoring infection progression and treatment effectiveness . However, since postoperative SSI of the spine are often localized, the diagnostic value of CRP is higher than that of PCT in these cases. The erythrocyte sedimentation rate (ESR) has high sensitivity for diagnosing SSIs, but due to its low specificity, it is less useful for assessing the effectiveness of anti-infective treatment. White blood cell counts (WBCs) can serve as another indicator for determining the presence of infection . Previous studies have mostly focused on identifying the risk factors for infections following spinal endoscopic surgery. 
Previous studies have suggested that factors such as advanced age , male gender , obesity , and diabetes are risk factors for SSI. Through our research, we hoped to identify the factors that influence the progression and severity of infections in patients with culture-negative infections during actual clinical treatment. By intervening in these factors, we aimed to prevent the hidden development of severe infections, thereby ensuring that our clinical anti-infection treatments remain stable and controllable. Materials and methods 2.1 Patient population We conducted a review of the medical records of patients who underwent PED at the Affiliated Hospital of Qingdao University between January 2014 and June 2023, utilizing the Hospital Information System (HIS). In this study, SSI was defined according to the Centers for Disease Control and Prevention criteria . A superficial SSI is defined as an infection that affects only the skin or subcutaneous tissue and occurs within 30 days post-surgery. A deep SSI is characterized as an infection that occurs within 30 days of the surgery (if no implant was used) or within one year (if an implant was present). This infection is considered surgery-related and involves deep soft tissues. A deep SSI is further classified by the presence of one or more of the following criteria: (1) purulent drainage from the deep incision; (2) a deep incision that spontaneously dehisces or is deliberately opened by the surgeon when the patient exhibits at least one of the following symptoms: fever (>38.0°C), localized pain, or tenderness, unless the site is culture-negative; (3) an abscess or other evidence of infection involving the deep incision, identified via direct examination, reoperation, or histopathologic or radiologic findings; (4) a diagnosis of deep incisional SSI made by the surgeon or attending physician .
Based on these criteria, we established the inclusion and exclusion criteria for this study as follows. Inclusion criteria: (1) PED performed for LDH and/or LSS after more than 6 months of ineffective conservative treatment, requiring single-segment surgical intervention; (2) age > 18 years ; (3) preoperative prophylactic administration of 2 g ceftriaxone via intravenous infusion 2 hours before surgery; (4) three days after the operation, an increase in body temperature (T > 38.0°C), imaging findings suggestive of infection, and laboratory results indicating elevated infection markers; (5) repeated blood cultures and tissue cultures were negative. Exclusion criteria: (1) history of previous lumbar spine surgery; (2) pre-existing lumbar spine tumors, lumbar instability, or lumbar infection prior to PED; (3) autoimmune disease or long-term use of glucocorticoids; (4) severe osteoporosis and/or fractures; (5) incomplete key information. Based on the inclusion and exclusion criteria, a total of 57 patients who developed culture-negative infections after PED surgery were included in this study. This retrospective study was approved by the Medical Ethics Committee of the Affiliated Hospital of Qingdao University; participants were not required to provide additional written informed consent. 2.2 Demographic and perioperative data collection This study retrospectively collected patient demographic data, surgical information, laboratory indicators, visual analogue scale (VAS) scores for pain, and imaging data related to the surgical site through the HIS.
The included demographic data consisted of age, sex, height, weight, body mass index (BMI), preoperative blood glucose control (BGC), maximum temperature (MT) during the infection period, preoperative blood volume (PBV), total blood loss (TBL), volume of irrigation fluid used during endoscopic surgery, red blood cell (RBC) count in the irrigation fluid sample, surgery duration, visible blood loss (VBL), hidden blood loss (HBL), HBL index (HBLI), and duration of antibiotic treatment (DAT). Laboratory indicators included preoperative RBC count, preoperative hematocrit (Hctpre), postoperative hematocrit (Hctpost), and the levels of CRP, PCT, ESR, and WBCs. CRP, PCT, ESR, and WBCs were recorded at four time points: time point 1 (T1), the start of anti-infection treatment; T2, the peak value; T3, a subsequent test near the peak (approximately 3-5 days after the peak); and T4, the last examination before discharge. VAS scores were also recorded at these four time points. Based on the literature review, we defined age, sex, BMI, HBL, MT during treatment, and BGC as risk factors. Additionally, to account for individual differences in preoperative blood volume, height, and weight, we developed the HBLI, calculated as HBLI = HBL/PBV; HBLI was also treated as a risk factor. We defined CRP, PCT, WBCs, ESR, VAS scores, and the DAT as indicators for assessing the severity of infection and treatment progress. 2.3 Perioperative patient management All surgeries were performed under general anesthesia by the same surgical team. The choice between percutaneous endoscopic interlaminar decompression (PEID) and percutaneous endoscopic transforaminal decompression (PETD) was made flexibly based on the patient's specific condition. All patients received a prophylactic intravenous infusion of 2 g ceftriaxone mixed with 100 mL normal saline prior to surgery.
According to the "Expert Consensus on Perioperative Fluid Therapy for Surgical Patients (China, 2015)," to maintain a stable fluid balance, the total amount of intravenous fluid administered on the day of surgery was calculated at 30 mL/(kg·d) . Preoperative and postoperative routine blood tests were performed on the morning of the surgery and in the evening after the surgery, both in the fasting state. 2.4 Calculation formula PBV was calculated according to the formula of Nadler: PBV = k1 × height^3 (m^3) + k2 × weight (kg) + k3 (for males: k1 = 0.3669, k2 = 0.03219, and k3 = 0.6041; for females: k1 = 0.3561, k2 = 0.03308, and k3 = 0.1833) . TBL was calculated by multiplying PBV by the change in Hct according to the Gross formula : TBL = PBV × (Hctpre − Hctpost)/Hctave, where Hctave is the average of Hctpre and Hctpost. VBL = RBC count of the irrigation fluid sample × 10000 × dilution multiple × K × total volume of irrigation fluid ÷ (preoperative RBC count × 10^9). Finally, HBL was calculated according to the formula of Sehat et al: HBL = TBL − VBL . After thoroughly mixing the irrigation fluid, a micropipette was used to withdraw a sample, which was then counted under a microscope using a hemocytometer . The dilution multiple and the constant K (K = 1, 16, or 25) were assigned according to the actual counting method used. 2.5 Statistical analysis Statistical analysis was conducted using SPSS 26.0 (IBM, USA). Continuous data were presented as means ± standard deviations. The normality of continuous variables was assessed using the Shapiro−Wilk test. To avoid errors caused by repeated measurements, repeated measures ANOVA was used to compare the differences in CRP, PCT, ESR, WBCs, and VAS across the four time points (T1, T2, T3, and T4). Mauchly's test was performed to assess the sphericity assumption, and when the assumption was violated, the Greenhouse-Geisser correction was applied to adjust the degrees of freedom.
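The blood-loss formulas in section 2.4 chain together directly; a small sketch (patient values are illustrative, not study data), assuming heights in meters, weights in kilograms, and volumes in liters:

```python
def nadler_pbv(height_m, weight_kg, male):
    """Preoperative blood volume (L) by Nadler's formula."""
    if male:
        k1, k2, k3 = 0.3669, 0.03219, 0.6041
    else:
        k1, k2, k3 = 0.3561, 0.03308, 0.1833
    return k1 * height_m ** 3 + k2 * weight_kg + k3

def gross_tbl(pbv_l, hct_pre, hct_post):
    """Total blood loss (L) by the Gross formula."""
    hct_ave = (hct_pre + hct_post) / 2
    return pbv_l * (hct_pre - hct_post) / hct_ave

def hidden_blood_loss(tbl_l, vbl_l, pbv_l):
    """HBL = TBL - VBL; HBLI normalizes HBL to blood volume."""
    hbl = tbl_l - vbl_l
    return hbl, hbl / pbv_l

# Illustrative 1.64 m, 68 kg female with Hct 0.42 -> 0.39 and a
# measured VBL of 50 mL from the irrigation fluid count.
pbv = nadler_pbv(1.64, 68.0, male=False)
tbl = gross_tbl(pbv, 0.42, 0.39)
hbl, hbli = hidden_blood_loss(tbl, vbl_l=0.05, pbv_l=pbv)
print(round(pbv, 2), round(tbl * 1000), round(hbl * 1000), round(hbli, 3))
# -> 4.0 297 247 0.062  (PBV in L; TBL and HBL in mL; HBLI dimensionless)
```

Note how little of the total loss is visible here: most of the loss is hidden, which is why the HBLI was introduced to make HBL comparable across patients of different blood volumes.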
Bonferroni correction was used to control the false-positive rate in multiple comparisons. For correlation screening, Spearman's rank correlation test or Pearson's correlation test was employed (0.8-1.0: very strong correlation; 0.6-0.8: strong; 0.4-0.6: moderate; 0.2-0.4: weak; 0.0-0.2: very weak or none). Indicators that could reflect the dynamics of treatment were selected for multiple linear regression analysis (correlated at T2 and/or with DAT, R > 0.2 and P < 0.05, with correlation weakening as treatment progressed). Categorical variables were transformed into dummy variables. For continuous variables, the standardized regression coefficients and their confidence intervals were calculated to reflect the actual impact on the dependent variable, and the standardized regression coefficients were used to explore the relative weights of the independent variables. To avoid multicollinearity in the multiple regression model, collinearity diagnostics were performed on all independent variables, and the variance inflation factor (VIF) was calculated. A VIF > 10 indicated strong multicollinearity between variables, and such variables were excluded from the multiple linear model and instead analyzed by partial correlation. The correlation heat map was drawn using ChiPlot ( https://www.chiplot.online/ ). A P value of less than 0.05 was considered statistically significant.
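The VIF screen described above can be reproduced with ordinary least squares alone: each predictor is regressed on the remaining predictors, and VIF = 1/(1 − R²). A self-contained sketch on simulated data (the variable names echo the study's predictors, but the numbers are made up):

```python
import random

def ols_r2(y, X):
    """R^2 of y regressed on the columns of X (with intercept), solving
    the normal equations by Gaussian elimination -- fine for a few predictors."""
    rows = [[1.0] + list(r) for r in X]
    k = len(rows[0])
    A = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    b = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    for c in range(k):  # elimination with partial pivoting
        p = max(range(c, k), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, k):
            f = A[r][c] / A[c][c]
            for j in range(c, k):
                A[r][j] -= f * A[c][j]
            b[r] -= f * b[c]
    beta = [0.0] * k
    for i in reversed(range(k)):
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    pred = [sum(be * xi for be, xi in zip(beta, r)) for r in rows]
    ybar = sum(y) / len(y)
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

def vifs(columns):
    """VIF for each predictor: regress it on the remaining predictors."""
    out = []
    for i, col in enumerate(columns):
        others = [c for j, c in enumerate(columns) if j != i]
        out.append(1.0 / (1.0 - ols_r2(col, list(zip(*others)))))
    return out

random.seed(0)
bmi = [random.gauss(25, 3) for _ in range(57)]
mt = [random.gauss(38.5, 0.5) for _ in range(57)]
# hbl is nearly a linear combination of bmi and mt, so its VIF blows up,
# mimicking the exclusion of HBL/HBLI from the regression model.
hbl = [2.0 * b + 5.0 * t + random.gauss(0, 0.5) for b, t in zip(bmi, mt)]
v = vifs([bmi, mt, hbl])
print([round(x, 1) for x in v])  # the collinear predictors show VIF >> 10
```

Predictors flagged this way are dropped from the joint model and examined by partial correlation instead, exactly as done for HBL and HBLI in the study.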
Results After conducting a search through the HIS, a total of 57 patients met the inclusion and exclusion criteria for this study. Normality tests revealed that the continuous variables followed a normal distribution. The study included 29 males and 28 females, with an average age of 59.72 ± 9.76 years, an average height of 163.86 ± 7.20 cm, an average weight of 67.68 ± 8.66 kg, and a BMI of 25.21 ± 2.78 kg/m^2. Detailed information was provided in . The statistical results for bleeding volume in endoscopic surgery were shown in . The DAT, VAS scores, and the laboratory indicators CRP, PCT, ESR, and WBCs were shown in . The changes in CRP, PCT, ESR, WBCs, and VAS scores are shown in . The results indicated that the four time points showed statistically significant differences and could represent the severity of infection and treatment effectiveness at four different stages. In the correlation screening, sex, BMI, MT, BGC, HBL, and HBLI were all found to be correlated with DAT (P < 0.05). Meanwhile, CRP at T2 and T3, PCT at T2, ESR from T1 to T4, and WBCs at T2 all showed good correlations with some of the independent variables. The specific results were shown in . The visualization results were shown in . Since CRP has high diagnostic value for spinal infections and can reflect the effectiveness of short-term medication, the highest value of PCT can reflect the severity and progression of infection, and DAT directly reflects the duration of treatment, CRP at the T2 and T3 time points, PCT at the T2 time point, and DAT were selected as dependent variables. T2PCT represents the severity of infection, T3CRP reflects the effect of short-term antibiotic use, and DAT represents the duration of treatment . A collinearity diagnosis was performed for all factors. The VIF value of HBL was 25.00 and that of HBLI was 16.66, suggesting that HBL and HBLI are highly correlated with the other independent variables.
Therefore, sex, BMI, BGC, and MT were used for multiple linear regression analysis with the dependent variables. The specific results were shown in . A partial correlation analysis was performed between HBL, HBLI, and the dependent variables. The specific results were shown in . The results showed that BGC was strongly correlated with the severity of infection (Beta = 0.60, P = 0.00), strongly correlated with short-term treatment effectiveness (Beta = 0.65, P = 0.00), and moderately correlated with the DAT (Beta = 0.41, P = 0.01). HBL was moderately correlated with the severity of infection (Partial-R = 0.49, P = 0.00) and with the DAT (Partial-R = 0.48, P = 0.00). HBLI was moderately correlated with the DAT (Partial-R = 0.50, P = 0.00). Female sex was a favorable factor for shortening the DAT (Beta = -0.25, P = 0.01), and a higher MT during infection may indicate a longer DAT (Beta = 0.28, P = 0.02). Discussion Although previous studies have generally confirmed that PED is a safe, effective, and minimally invasive procedure with a low incidence of complications, SSIs after PED remain unavoidable. Regarding the incidence of infection, Ogihara's study indicated that the infection rate of PED is approximately one-third of that of traditional open surgery . Similarly, Hussein's research reported comparable results . According to Watanabe's findings, continuous irrigation during PED may help reduce bacterial colonization at the surgical site, thereby lowering the risk of infection . Owen's study also suggested that smaller surgical incisions are associated with a reduced probability of SSI . Risk factors for postoperative spinal infection include advanced age, male gender, obesity, a history of lumbar spine surgery, malnutrition, diabetes, and long-term corticosteroid use. However, some studies have indicated that a higher BMI may be associated with a lower incidence of PED-related infections and earlier relief of postoperative VAS scores .
Nevertheless, the majority of studies suggest that obese patients are at higher risk for SSI, venous thromboembolism, longer operation times, and greater intraoperative blood loss. Our analysis showed a weak positive correlation between BMI and the DAT (R = 0.29, P < 0.05), suggesting that obesity may not play a predominant role in culture-negative postoperative infections. The increased subcutaneous fat in obese patients likely promotes the release of inflammatory cytokines, leading to insulin resistance and, consequently, elevated blood glucose levels, which may increase the risk of infection exacerbation; thus, BMI did not show a strong positive correlation with the DAT. In contrast, our study demonstrated a strong correlation between BGC and the DAT (R = 0.72, P < 0.05), likely because persistently elevated blood glucose levels increase the risk of infection exacerbation, prolonging the need for antibiotic treatment. It should be noted that blood glucose levels may not directly affect the efficacy of antibiotics in treating infections, a relationship that warrants further investigation in future studies. The cause of postoperative infection following PED remains unclear. However, two main theories are widely accepted in clinical practice. The first suggests that spread from an existing bacterial focus, such as a skin abscess, endocarditis, or pharyngitis, could be the source of the infection. The second proposes that invasive procedures, such as surgery, trauma, or lumbar puncture, may directly introduce bacteria into the infection site . The prophylactic use of antibiotics is currently the main reason that bacterial cultures are negative in these infections. Given that spinal infections can damage the lumbar vertebrae, nerve roots, and the dural sac, SSI in the spine often presents with significant localized pain and radiating lower-limb nerve pain . Therefore, pain is often one of the earliest clinical signs of postoperative spinal infection.
Whether localized to the surgical site or radiating along nerve pathways, pain should be given careful attention. However, as with sterile inflammation, the early stages of infection may present with subtle systemic symptoms, causing localized pain to be misdiagnosed as normal postoperative neuropathic pain. Since culture-negative infections preclude targeted antibiotic treatment, broad-spectrum antibiotics are often used empirically. Additionally, the presence of postoperative fever further complicates the accurate diagnosis of culture-negative infections . Therefore, the aim of this study was to conduct a comprehensive analysis of the clinical baseline characteristics, infection-related laboratory indicators, and imaging findings of patients, in order to identify the factors that significantly impacted the progression of anti-infection treatment. By addressing these risk factors, severe clinical infections could be prevented. The study hoped to provide valuable insights for the clinical management of patients with culture-negative infections following PED. HBL, first identified by Pattison in 1973, is now defined as the extravasation of blood into tissue spaces and/or joint cavities, together with the loss of hemoglobin due to hemolysis. Compared with VBL, HBL accounts for the vast majority of blood loss during PED. In our study, HBL showed strong correlations with CRP, PCT, ESR, and VAS scores. This may be because increased HBL leads to elevated local pressure, thereby intensifying pain symptoms. Accumulated HBL might also lead to local hemolysis, which could trigger the release of inflammatory mediators, resulting in neuroinflammation and intense neuropathic pain. Additionally, the accumulation of significant bleeding may result in the formation of local blood clots, which create an environment conducive to the buildup of inflammatory factors and bacterial colonization. Therefore, HBL serves as a significant risk factor for early increases in CRP, PCT, and ESR.
In the later stages of infection treatment, as antibiotics take effect and local HBL is absorbed, HBL is no longer a major risk factor influencing the outcome of anti-infection therapy. Our findings supported this inference. Hence, controlling HBL is crucial for mitigating early infection progression. Previous studies have indicated that factors such as anesthesia methods, intraoperative medications, perioperative anticoagulants, blood pressure regulation during surgery, and even gastrointestinal ulcers can influence the volume of HBL . To ensure data accuracy, we introduced the HBLI. HBL and HBLI behaved similarly across many characteristics, further confirming that intraoperative HBL management influences the progression of postoperative infection. In addition, our study found that sex is also a factor that affects the severity of infection and the therapeutic effect (Beta = -0.25, P < 0.05). Sex is a categorical variable and was converted into a dummy variable, defined as “male = 0, female = 1” for convenience of model calculation. Under this coding, the regression coefficient for sex in the model represents the mean difference in the dependent variable between women and men. Specific to the results of this study, female sex was a protective factor after infection, consistent with previous research conclusions . Limitations: This study has several limitations: 1. As a retrospective study, the evidence is relatively weak, and prospective studies are needed to confirm the reliability of the conclusions; 2. Due to the nature of this study, it was not possible to establish an appropriate control group. Patients with negative cultures and those receiving targeted antibiotic treatment for positive cultures, as well as patients with aseptic inflammation who do not require antibiotics, could not form a convincing control group; 3.
One limitation of our study was the lack of a suitable reference standard for grading BGC, which was instead based on clinical experience. This reliance on subjective grading may have reduced the robustness and generalizability of our findings. Further studies are needed to validate our conclusions; 4. Additionally, we did not account for the patients’ primary infectious conditions, such as chronic pneumonia and rhinitis, which could have potentially influenced the effectiveness of the treatment and introduced bias into our results. Future research with larger sample sizes and multi-center data analysis is needed to strengthen the validity of the study’s methodology. Conclusion To the best of our knowledge, this is the first study in the field of spine surgery to analyze the risk factors affecting antibiotic treatment in patients with culture-negative infections following endoscopic surgery. Our findings suggest that healthy blood glucose levels and lower HBL and HBLI might help reduce the duration of antibiotic use after infection. Effective hemostasis during surgery to reduce HBL and good preoperative BGC indicators are both beneficial measures for infection treatment.
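The dummy-variable interpretation described above (male = 0, female = 1, with the sex coefficient equal to the female-minus-male mean difference) can be illustrated on synthetic data. This is a minimal sketch, not the study's dataset; the outcome is constructed so the fitted coefficient reproduces a -0.25 mean difference.

```python
import numpy as np

# Illustrative sketch on synthetic data (not the study's dataset): with the
# coding male = 0, female = 1, the fitted sex coefficient equals the mean
# difference in the outcome between females and males, and the intercept is
# the male mean. The outcome here is built so that difference is -0.25.
sex = np.repeat([0.0, 1.0], 50)          # 50 males (0), 50 females (1)
outcome = 1.0 - 0.25 * sex               # females average 0.25 lower
X = np.column_stack([np.ones_like(sex), sex])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
print(round(beta[0], 2), round(beta[1], 2))  # 1.0 -0.25
```

With this coding, a negative coefficient directly reads as "female sex is associated with a lower value of the dependent variable," which is why the sign alone supports the protective-factor interpretation.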
Sex estimation using skull silhouette images from postmortem computed tomography by deep learning | c25bd631-2d38-40ed-a3b6-a270b7ffd5f6 | 11442976 | Forensic Medicine[mh] | In August 2023, wildfires in Maui killed numerous victims. Individual identification during the disaster was difficult because many bodies were severely damaged, and the disaster scene was extremely large and complex . Hence, it is important to investigate suitable approaches for personal identification in large-scale disasters. In 2007, Rutty et al. reported the use of mobile postmortem computed tomography (PMCT) for mass fatality incidents . Subsequently, PMCT was recognized as being a useful tool for disaster victim identification (DVI) . PMCT is a noninvasive technique to quickly obtain whole-body information. Furthermore, image data can be stored semi-permanently and shared worldwide . According to “The Program on Promotion of Policy about Death Investigation” enacted in Japan, postmortem imaging is a suitable method for performing scientific investigations concerning personal identification. This law also promotes the development of databases for personal identification . PMCT has been extensively studied for various aspects of forensic investigation, including the determination of cause of death and personal identification – . These studies underscore the importance of integrating advanced imaging techniques like PMCT into forensic practice to enhance the accuracy and efficiency of investigations. While the soft tissues of the human body decay over time after death, the bones undergo minimal change; therefore, useful information for personal identification is retained. In particular, the morphological information of the bone can be used for sex estimation, which is important for personal identification. Male bones are generally bigger, thicker, and more uneven than those of females from the same age group . The skull and pelvis have particularly obvious sex-related differences in shape. 
In 2008, Walker achieved 88% sex estimation accuracy using a discriminant function based on visually assessed skull traits . In 2022, Garcovich et al. obtained 88.7% accuracy in males and 90.7% accuracy in females by performing geometric morphometric analysis of frontal bone landmarks in head X-ray images . Although these studies have yielded valuable results, their reproducibility is limited by the visual assessment of skull shape or the manual placement of landmarks. During DVI involving a large number of corpses, rapid and accurate personal identification is necessary. To make rapid and accurate sex estimation possible, it is important to investigate the possibility of using artificial intelligence (AI). The applicability of deep-learning methods, such as convolutional neural networks (CNNs), has been explored in various fields. In forensic anthropology and forensic pathology, CNNs are used to solve certain problems , such as sex and age estimation – , recognition of head measurement landmarks , and segmentation of head structures . In a study on sex estimation using deep learning, Cao et al. achieved more than 90% accuracy using pelvic CT images . In 2019, Bewes et al. reported 95% accuracy using three-dimensional (3D) skull reconstruction images from antemortem CT . In addition to sex estimation with antemortem CT, sex estimation with PMCT or X-ray imaging must be considered to develop a practical method that can be used worldwide . Recent advancements have proposed new sex estimation methods that combine the advantages of visual observation with the accuracy of measurement-based approaches. For example, Nikita and Michopoulou (2019) proposed a method that quantifies the shape of specific cranial regions, achieving 94% accuracy in sex classification , . Yang (2019) achieved 96.7% accuracy by applying machine learning algorithms to 3D cranial models .
However, these prior studies still require a manual process to identify key features or provide 3D data, which limits the automation and convenience of sex estimation. In this study, two-dimensional (2D) images showing only the skull outline (hereafter referred to as silhouette images) were obtained from head PMCT and used for sex estimation in personal identification employing trained deep-learning models. Silhouette images can be acquired from both PMCT images and X-ray imaging, which is commonly used worldwide. To the best of our knowledge, this is the first study to report sex estimation using silhouette images. Database This study included 264 cases (132 of each sex) from 337 head PMCT cases (203 males and 134 females) who underwent PMCT at autopsy. Cases with bone defects located on the outline of the skull in the silhouette image were excluded. Bone defects inside the skull outline can be covered by filling inside the skull outline using a labeling process, which is described later, and these cases were included. In addition, cases wherein the parietal region was not within the imaging range and children (under 18 years) were excluded. Furthermore, we randomly excluded some PMCT cases to obtain the same number of male and female images to avoid biasing the results of sex estimation using deep-learning models. The patients were aged 18–97 years, with an average age of 58.5 years. Bodies were placed in bags without positioning. The study was approved by the Institutional Review Board of Kyushu University, Japan (approval number #2017-27-285). All methods were performed in accordance with relevant guidelines and regulations. Due to the retrospective nature of the study, the Kyushu University Institutional Review Board waived the need for obtaining informed consent.
PMCT images were obtained using a 16-row multidetector CT scanner (ECLOS, Hitachi Medical Corporation, Tokyo, Japan) in 2014–2019 at Department of Forensic Pathology and Sciences, Kyushu University (Table ), and were anonymized by removing patient IDs, names, and other personal information before usage. All image data were corrected using the skull angle correction developed by Kawazoe et al . so that the skull had the same position as that in the antemortem CT imaging. For the application of deep learning, we applied a four-fold cross-validation, and the 132 cases of each sex were divided into 99 training images (75%) and 33 test images (25%). Of the 99 training images, 29 were used as validation images.
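The four-fold cross-validation split described above (132 cases per sex, with 33 test, 29 validation, and 70 training cases per fold) can be sketched as follows. The actual fold assignment in the study is unknown; contiguous indices are used purely for illustration.

```python
import numpy as np

# Minimal sketch of the described split (per sex): 132 cases, four-fold
# cross-validation, so each fold holds out 33 test cases; of the 99 remaining
# training cases, 29 serve as validation and 70 as training. The actual fold
# assignment in the study is unknown; contiguous indices are illustrative.
cases = np.arange(132)
folds = np.array_split(cases, 4)              # four folds of 33 cases each
for k in range(4):
    test = folds[k]
    train_pool = np.concatenate([folds[i] for i in range(4) if i != k])
    val, train = train_pool[:29], train_pool[29:]
    assert len(test) == 33 and len(val) == 29 and len(train) == 70
print(len(train), len(val), len(test))  # 70 29 33
```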
Figure shows the procedure for creating silhouette images from head PMCT data. First, the patient table and body bag were removed from the PMCT volume data, which had been positioning-corrected using a semi-automatic method for readjusting the head position as proposed by Kawazoe et al . The soft tissues were removed via threshold processing with 300 Hounsfield units [HU], leaving only the bone. Subsequently, 2D images were created using maximum intensity projection by varying the yaw angles in seven different directions (0° (frontal view), 15°, 30°, 45°, 60°, 75°, and 90° (lateral view)). The seven directions were chosen to capture a comprehensive range of anatomical features: for example, the frontal view reveals the zygomatic bone, while the lateral view highlights the superciliary arch and the external occipital protuberance (EOP). By including intermediate angles, we aimed to ensure that the silhouette images represent various aspects of the skull’s morphology. The 2D images were converted into silhouette images using binarization processing. The caudal signal from the cervical vertebrae was removed to easily learn only the morphological information of the skull.
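The threshold, projection, and binarization steps above can be sketched with numpy on a synthetic volume. This is a hedged illustration (real PMCT data and the yaw rotation of the volume for the seven viewing directions are omitted), not the authors' program.

```python
import numpy as np

# Minimal numpy sketch of the silhouette pipeline on a synthetic CT-like
# volume (values in Hounsfield units). Yaw rotation for the seven viewing
# directions is omitted; only threshold -> maximum intensity projection ->
# binarization is shown.
volume = np.full((64, 64, 64), -1000.0)            # air background
volume[20:44, 20:44, 20:44] = 1200.0               # synthetic "bone" block
bone = np.where(volume >= 300.0, volume, -1000.0)  # 300 HU bone threshold
mip = bone.max(axis=0)                             # project along one view axis
silhouette = (mip >= 300.0).astype(np.uint8)       # binarize: 1 = bone, 0 = background
print(silhouette.shape, int(silhouette.sum()))     # (64, 64) 576
```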
In certain lateral cases, observing the mastoid is challenging owing to the overlap of the mandible and cervical vertebrae. To address this issue, the cutting position of the lowest skull coordinates in the lateral images was set to 15 slices above that for the other six directions. Subsequently, noise removal was achieved by first using the contour detection functions from the Python library OpenCV to automatically identify the largest contour, corresponding to the skull. We then filled the inside of the detected contour to remove internal noise and subsequently removed any remaining fine noise outside the skull, addressing background noise effectively. Finally, silhouette images for deep learning were created by cropping the images to 256 × 256 pixels, centered on the skull (Fig. ). A Python program was used for the image processing to create the silhouette images. We implemented data augmentation with horizontal flipping and ± 5° rotation to increase the number of training and validation images four-fold (70 × 4 training and 29 × 4 validation images for each sex; 560 training and 232 validation images in total). The nearest-neighbor method was selected as the interpolation method for image rotation. In addition, silhouette images were converted into red-green-blue images to apply transfer learning using ImageNet . This will be discussed later in this section. AlexNet and VGG16 were used for sex estimation. Both deep-learning models adopt transfer learning using the parameters learned on ImageNet, a large database with over 10 million color images. During training, the batch size was 64, the optimization algorithm was Adam, the activation function was SoftMax, and the loss function was cross-entropy. The learning rates were 1e-5 for AlexNet and 2e-6 for VGG16. We applied an early stopping function that stopped the training when the validation loss was minimized.
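The paper's noise removal uses OpenCV's contour-detection functions; the sketch below reproduces the same idea (keep the largest foreground blob, fill its interior holes) with a plain flood fill so the example stays self-contained. The toy mask is hypothetical, not study data.

```python
import numpy as np
from collections import deque

# Hedged sketch of the noise-removal step: keep the largest 4-connected
# foreground component (drops stray noise outside the skull), then fill any
# background not reachable from the image border (interior holes).
def clean_silhouette(mask):
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    best = []
    for y in range(h):
        for x in range(w):
            if mask[y, x] and not seen[y, x]:
                comp, q = [], deque([(y, x)])
                seen[y, x] = True
                while q:
                    cy, cx = q.popleft()
                    comp.append((cy, cx))
                    for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > len(best):
                    best = comp
    out = np.zeros((h, w), dtype=np.uint8)
    for y, x in best:
        out[y, x] = 1
    # Fill interior holes: background reachable from the border stays 0.
    outside = np.zeros((h, w), dtype=bool)
    q = deque((y, x) for y in range(h) for x in range(w)
              if (y in (0, h - 1) or x in (0, w - 1)) and out[y, x] == 0)
    for y, x in q:
        outside[y, x] = True
    while q:
        cy, cx = q.popleft()
        for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
            if 0 <= ny < h and 0 <= nx < w and out[ny, nx] == 0 and not outside[ny, nx]:
                outside[ny, nx] = True
                q.append((ny, nx))
    out[(out == 0) & ~outside] = 1
    return out

demo = np.zeros((9, 9), dtype=np.uint8)
demo[2:7, 2:7] = 1        # toy "skull" block...
demo[4, 4] = 0            # ...with an interior defect to be filled
demo[0, 8] = 1            # stray exterior noise pixel
cleaned = clean_silhouette(demo)
print(int(cleaned.sum()))  # 25: defect filled, noise removed
```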
Deep learning was performed using Python with an Intel Xeon Silver 4110 central processing unit, 32 GB of memory, and an NVIDIA TITAN RTX graphics processing unit. The test results for sex estimation were evaluated for accuracy. Accuracy was defined as the percentage of images where the sex was correctly identified among 264 images (four times 33 test images for each sex) and was calculated for each deep-learning model and each projection angle. In addition, a majority vote based on the test results of multiple viewing angles, hereafter referred to as the majority vote, was conducted. A case identified correctly in more than half of the projection angles was regarded as true. In addition, we evaluated the performance of each model by calculating Matthews Correlation Coefficient (MCC). We computed precision, recall, and F1 score for each sex class (male and female) to provide a more comprehensive understanding of the models’ classification performance. To evaluate the trained deep-learning models, we created a receiver operating characteristic (ROC) curve by plotting the sensitivity on the vertical axis and 1 − specificity on the horizontal axis. Sensitivity was defined as the ratio of male images correctly identified as males, and specificity was defined as the ratio of female images correctly identified as females. We calculated the area under the ROC curve (AUC) as a quantitative index. Furthermore, gradient-weighted class activation mapping (Grad-CAM) , which is among the most commonly employed methods for explainable AI, was used to investigate which parts of the skull were focused on by the deep-learning models. We compared the Grad-CAM heatmap to areas with obvious sex differences (superciliary arch, forehead, parietal, and external occipital protuberance (EOP)) described in forensic anthropology and forensic pathology . 
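The evaluation quantities defined above (accuracy, sensitivity, specificity, MCC) can all be computed from confusion-matrix counts; the labels and predictions below are hypothetical, chosen only to make the arithmetic concrete.

```python
import numpy as np

# Sketch of the evaluation metrics from confusion-matrix counts.
# Labels: 1 = male, 0 = female; the predictions are hypothetical.
y_true = np.array([1] * 6 + [0] * 6)
y_pred = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 1, 1])
tp = int(((y_true == 1) & (y_pred == 1)).sum())   # males called male
tn = int(((y_true == 0) & (y_pred == 0)).sum())   # females called female
fp = int(((y_true == 0) & (y_pred == 1)).sum())
fn = int(((y_true == 1) & (y_pred == 0)).sum())
accuracy = (tp + tn) / len(y_true)
sensitivity = tp / (tp + fn)   # male images correctly identified as male
specificity = tn / (tn + fp)   # female images correctly identified as female
mcc = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
print(accuracy, round(sensitivity, 3), round(specificity, 3), round(float(mcc), 3))
```

Sweeping a decision threshold over the model's class probabilities and plotting sensitivity against 1 − specificity at each threshold yields the ROC curve described above, with the AUC as its summary index.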
Performance of deep-learning models Table presents the average values of the training accuracy, training loss, validation accuracy, and validation loss in the seven directions for each deep-learning model. VGG16 achieved the best validation accuracy and validation loss. Both models exhibited the highest validation accuracy in the lateral projection: AlexNet = 88.6% and VGG16 = 90.1%. Tables and present the values of precision, recall, accuracy, and MCC for sex estimation across seven angles for the two deep-learning models. For AlexNet, the highest accuracy was achieved at the 90° projection angle, with an accuracy of 0.886. This angle also yielded the highest MCC value of 0.774, indicating a strong overall performance. The precision and recall values at this angle were 0.911 and 0.856 for males, and 0.864 and 0.917 for females, respectively. Similarly, for the VGG16 model, the lateral view achieved the highest accuracy of 0.898 and the highest MCC of 0.796. At this angle, the precision and recall were 0.913 and 0.879 for males, and 0.883 and 0.917 for females, respectively. Furthermore, Fig. shows the ROC curves; the lateral view achieved the highest performance among all projection angles for each deep-learning model, with AUC values of 0.951 for AlexNet and 0.957 for VGG16. Notably, VGG16 consistently demonstrated higher precision, recall, and MCC than AlexNet at most angles, particularly excelling in the lateral view. These results suggest that VGG16 is generally more effective and reliable for sex estimation across various angles. Table presents the results of the majority vote in three directions (0°, 45°, and 90°) and seven directions (0–90°). The majority vote in three directions exhibited slightly higher accuracy for AlexNet (89.4%) and VGG16 (91.7%) than single-direction estimation, as shown in Tables and .
The heatmap outputs obtained using Grad-CAM are shown in Fig. .
The model paid more attention to the areas shown in warm colors than those in cold colors. The deep-learning models identified specific anatomical features of the skull as most influential in determining sex. In males, the model primarily focused on the superciliary arch, particularly in the lateral (90°) view, while in females, the emphasis was on the EOP in the lateral view and the zygomatic bones in the frontal (0°) view. Figure shows that VGG16 consistently identified key anatomical features associated with sexual dimorphism. In female cases, the zygomatic bone was prominently recognized at 0°, while both the superciliary arch and the EOP were distinctly highlighted at 90°. AlexNet also recognized these features but with less precision, often focusing more broadly on internal skull regions and background areas. The agreement between the focus of the Grad-CAM and areas with obvious sex differences in forensic anthropology was investigated. Here, 33 test cases for each sex, whose accuracies were the closest to the overall accuracy in the four-fold cross-validation, were analyzed in lateral images of VGG16, which showed the highest accuracy (Table ). It was found that 93% of the images (27/29) correctly estimated as males were focused on the superciliary arch and all images (31/31) correctly estimated as females were focused on the EOP.
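For reference, a hedged numpy sketch of the Grad-CAM computation behind these heatmaps: channel weights are the global-average-pooled gradients, and the heatmap is the ReLU of the weighted sum of feature maps. The feature maps and gradients below are synthetic stand-ins for a real network's activations.

```python
import numpy as np

# Channel weights alpha_k are the global-average-pooled gradients of the
# class score with respect to the last convolutional feature maps A_k, and
# the heatmap is ReLU(sum_k alpha_k * A_k). Synthetic inputs only.
feature_maps = np.stack([np.ones((4, 4)), np.eye(4)])           # A_k, shape (K, H, W)
gradients = np.stack([np.full((4, 4), 0.5), -np.ones((4, 4))])  # dScore/dA_k
weights = gradients.mean(axis=(1, 2))                           # alpha_k
cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
print(cam.shape, float(cam.max()))  # (4, 4) 0.5
```

In practice the heatmap is then upsampled to the input resolution and overlaid on the silhouette image, which is how the warm-color regions in the figures are produced.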
In this study, we created skull silhouette images from head PMCT and conducted sex estimations using two deep-learning models. Sex estimation using silhouette images achieved an accuracy of 89.8% (Table ). In addition, the majority vote in the three directions improved the accuracy to 91.7%. This performance is comparable to that of conventional sex estimation using head X-ray imaging , , and shows the feasibility of sex estimation by combining silhouette images and deep learning. Conventional methods for estimating sex from the skull have involved the use of discriminant functions with manual measurement values , or setting of landmarks in images . These traditional approaches rely on forensic anthropologists manually identifying anatomical features that typically exhibit sexual dimorphism, such as the more robust and larger structures found in male skulls, including prominent superciliary arch, thicker zygomatic arches, larger mastoid processes, marked EOP, a steeper forehead slope, and more pronounced parietal bossing (Fig. ). However, these methods are time-consuming and therefore unsuitable during DVI in large-scale disasters. To address these challenges, we investigated sex estimation using deep learning with 2D images obtained from PMCT and demonstrated the potential usefulness of our automatic and rapid approach. A comparison of the projection angles in the test results revealed the highest accuracy to be at 90° for AlexNet and VGG16 (Table ). Silhouette images likely reflect sex differences in the superciliary arch and EOP, which are clearly observable near the lateral direction. When comparing the deep-learning models, VGG16 showed the highest accuracy in four directions (0°, 30°, 45°, and 90°), as well as the highest AUC value (Fig. ). VGG16 contained more training parameters than AlexNet and could more effectively recognize points with large sex differences. 
Analysis of the Grad-CAM heatmaps revealed sex differences aligned with forensic anthropology findings, notably in the superciliary arch, zygomatic bone, and EOP (Fig. ). Among these, the VGG16 model demonstrated a particular focus on the outline of the skull, corroborating previous studies that reported VGG16’s superior reliability over AlexNet for visual assessment . Specifically, Grad-CAM analysis showed that in correctly identified cases, the superciliary arch was emphasized in 93% of male cases (27/29), while the EOP was recognized in all female cases (31/31), highlighting a clear sex difference. These results suggest that the deep-learning models, particularly VGG16, rely on key anatomical features such as the superciliary arch for males and the EOP for females in sex estimation. Additionally, the zygomatic bone was identified as an influential feature in some cases. This observation is consistent with traditional forensic methods, which also emphasize these regions for sex determination. However, it is important to note that these models, including VGG16, occasionally produced heatmaps with warm colors outside the skull, indicating potential areas of model uncertainty or error. In the majority vote, an odd number of projection angles was used because a case was regarded as true when the sex was correctly identified in more than half of the angles (e.g., at least four out of seven). The majority vote in the seven directions did not improve the performance (Table ). This may be because the decision points of the deep-learning models were the same in the five oblique directions (15–75°), wherein the heatmaps output by Grad-CAM represent similar positions (zygomatic bone). In fact, the test results in seven directions tended to misidentify the same cases, particularly in these five directions. The majority vote in three directions (0°, 45°, and 90°) improved the performances of AlexNet and VGG16, with a maximum accuracy of 91.7%.
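The majority-vote rule described above is simple to state in code: a case is assigned the sex that wins more than half of the per-angle predictions, with an odd number of angles avoiding ties. The per-angle predictions below are hypothetical.

```python
# Minimal sketch of the majority-vote rule: a case is assigned the sex that
# wins more than half of the per-angle predictions (an odd number of angles
# avoids ties). The per-angle predictions below are hypothetical.
def majority_vote(per_angle_predictions):
    votes = list(per_angle_predictions.values())
    return max(set(votes), key=votes.count)

three_view = {0: "male", 45: "male", 90: "female"}  # three-direction vote
print(majority_vote(three_view))  # male
```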
According to the Grad-CAM heatmaps, both deep-learning models recognized different points in these three directions (0°, zygomatic bone; 45°, zygomatic bone and mastoid process; 90°, superciliary arch and EOP), so the effects of the majority vote were evident. This improvement in accuracy indicates that it is important to train the models with the skull outline obtained from multiple projection angles, that is, the 3D skull shape. However, Matsunobu et al. [37] reported that 3D images contain more information than 2D images and therefore require more computation time, making them unsuitable for DVI. Therefore, developing a model that can learn multiple 2D contour shapes is required to improve accuracy. Our approach focuses on silhouette images, which capture the skull’s edge contours, including sexually dimorphic features such as parietal bossing, supraorbital ridge elevation, and the external occipital protuberance. Silhouette images can also alleviate the effects of decay and partial defects in the body by filling the skull contours. This is particularly useful in forensic cases where bodies found during disasters often exhibit defects or damage to the skull. In fact, approximately 10% of the PMCT images used in this study had sphenoid and temporal bone defects. The area around the sphenoid bone, which is relatively thin, is easily damaged and cannot be depicted well. Surface-rendered and volume-rendered images, which represent surface unevenness, depict such defects, which may adversely affect deep learning. In contrast, 2D silhouette images represent only outline information, excluding internal contrast details that may be influenced by decay or damage, making them more robust for analysis even when parts of the skull are damaged. Therefore, compared to traditional manual landmark-setting methods, our deep-learning approach using 2D silhouette images can significantly reduce the time required for sex estimation by automating the process.
This makes it particularly valuable in forensic contexts, especially in large-scale disasters where quick and reliable identification is critical. Furthermore, the ability of silhouette images to effectively represent key features, even in cases of partial skull damage, highlights the utility of our method in various forensic scenarios. In the case of large-scale disasters, silhouette images can be created using a portable X-ray system, even if PMCT imaging is difficult to perform. This means that this approach can be adapted for DVI even in countries where PMCT is unavailable. Accordingly, this study indicates that 2D silhouette images created from X-ray images are useful for estimating the probable sex of victims as a novel method in DVI. Integrating deep learning with PMCT offers significant advantages, especially in regions where traditional autopsies may be culturally or ethically challenging, such as Japan and Korea. PMCT provides a non-invasive alternative that is likely to be more acceptable in such contexts. Additionally, the combination of deep learning and PMCT enhances the efficiency and accuracy of sex estimation, which is crucial in scenarios requiring rapid identification, like large-scale disasters. While this approach is valuable for efficiently managing large volumes of cases and reducing the workload for forensic experts, it is essential to apply these advanced techniques with careful consideration of ethical implications, particularly concerning privacy and transparency, to ensure sensitive information is handled appropriately. Limitations Our study had several limitations. First, the dataset was collected from a single institution and consisted of 264 cases, which limits the generalizability of our findings across different populations and age groups. We did not explore how sex estimation accuracy might vary based on population diversity or age.
However, individuals under 18 years of age, whose growth and secondary sexual characteristics were not fully developed, were excluded from this study. Future studies should aim to include a broader and more diverse dataset to assess the impact of population and age variation on sex estimation accuracy. Second, the entire skull was used to train deep-learning models. The proposed approach cannot be used to identify cases that differ significantly from the usual shape because cases with large defects on the skull outline were excluded in this study. Sex estimation from a part of the skull also must be considered in future studies. Finally, the number of training images was insufficient. The maximum accuracy was 89.8% and further improvements in performance are required for actual personal identification. Therefore, deep learning should be applied with additional training images to solve this problem. In addition, considering the rapid progress in deep learning , the development and assessment of AI models for personal identification is a future challenge.
This study demonstrates the potential of utilizing 2D silhouette images from postmortem CT scans for rapid and accurate sex estimation in forensic science. By applying deep learning techniques, key anatomical features such as the superciliary arch and external occipital protuberance were effectively identified, yielding high levels of accuracy. The proposed method is fast, robust, and provides valuable support for personal identification in various forensic contexts.
Biological Hazards and Indicators Found in Products of Animal Origin in Cambodia from 2000 to 2022: A Systematic Review | a138ef12-f6de-47d0-925a-ba86873783ee | 11675544 | Microbiology[mh] | Foodborne diseases (FBDs) are a critical concern on a global scale, as contaminated food can lead to more than 200 health issues, such as gastrointestinal, gynecological, and immunological disorders, as well as cancer. The majority of these illnesses can be effectively prevented . Safeguarding food safety is a joint responsibility among various national authorities and requires an integrated, multisectoral approach that aligns with the principles of the one-health approach . Regions in Africa and Southeast Asia bear the highest burden (more than 90%) of foodborne diseases . The burden of FBDs in low- and middle-income countries (LMICs) comes from biological hazards, especially bacteria, parasites, viruses, and biotoxins, contaminating perishable food products mostly accessed from traditional markets . Many food poisoning cases are attributed to the consumption of products of animal origin (POAOs) . The leading causes of FBDs are primarily biological hazards, especially norovirus and Campylobacter spp. Furthermore, non-typhoidal Salmonella enterica followed by Salmonella Typhi, Taenia solium , and the hepatitis A virus are major factors in the fatalities associated with FBDs . The role of the National Food Control System (NFCS) is crucial in safeguarding consumer health and promoting fair practices in food trade . Nevertheless, the ability to effectively handle POAOs safely is generally weak in LMICs due to the limited ability to comply with food safety regulations and hygiene practices. Moreover, foodborne outbreaks in LMICs are often underestimated or not properly recorded because of the lack of sustainable surveillance systems . 
The safety of POAOs is becoming a more significant concern due to higher levels of food consumption and longer, more complex supply chains that can lead to contamination by various hazards, including bacteria and parasites . In the Kingdom of Cambodia (Cambodia), diarrhea accounts for approximately 8% of the mortality among children under the age of five and is linked to the ingestion of unsafe food. However, the specific causal agents of most diarrheal cases remain unidentified . Between 2014 and 2023, the Cambodia Communicable Disease Control (C-CDC) reported a total of 178 incidents, resulting in 7224 cases and 180 fatalities attributed to FBDs. Many outbreaks have been primarily associated with the consumption of contaminated fish noodles and naturally toxic pufferfish . There are also instances when the identification of hazards and the investigation of their causal agents remain incomplete . The Cambodian NFCS is coordinated by the Council of Agriculture and Rural Development (CARD) of Cambodia, alongside six different line ministries, known as the Food Safety Working Group (FSWG), and other sub-committees, including the Foodborne Outbreak Report Team (FORT), which has the obligation to respond to FBD outbreaks. The Ministry of Agriculture, Forestry, and Fisheries (MAFF) is mainly responsible for overseeing the safety of the primary production of POAOs . Cambodia currently lacks a well-established surveillance system for foodborne pathogens, which consequently hinders the availability of comprehensive data. Nonetheless, an event-based surveillance system is in operation to collect reports related to public health events from a wide range of sources, including media channels and the public, who can report incidents via a dedicated hotline by FORT . To date, there is no known comprehensive systematic review that examines the evidence of foodborne biological hazards associated with POAOs in Cambodia, especially in the context of bolstering the NFCS.
Our review aims to assess the quantity of biological hazards (bacteria, biogenic amines, biotoxins, parasites, toxin-producing fungi, and viruses) and any associated information (type of study initiative, year of publication, food source, stage of value chain, and location) on POAOs in Cambodia over a period of 22 years, with the intention of gathering data and identifying the existing gaps to enhance the NFCS. The specific objectives are (1) to identify foodborne biological hazards and their indicators as detected in POAOs; (2) assess the quantity of contamination of biological hazards which have been found to exceed the Cambodian, Codex Alimentarius Commission, and European Union standards/recommended limits; (3) evaluate whether there is any association between the presence of a hazard and the level of hazard with the type of value chain and its inherent practices; and (4) reveal evidence of biological hazards reported in different provinces. 2.1. Protocol Development and Registration The review protocol was developed based on the Preferred Reporting Items for Systematic Review and Meta-Analysis protocols (PRISMA-P) 2015 statement . The concept of the protocol followed the previous review studies conducted by the International Livestock Research Institute (ILRI) for the Feed the Future Initiatives of USAID . The protocol was registered to PROSPERO in March 2023. The registration number PROSPERO 2023 CRD42023409476 can be found at the following weblink: https://www.crd.york.ac.uk/prospero/display_record.php?ID=CRD42023409476 (accessed on 27 March 2023). The definitions of the key terms used in this review were listed in . 2.2. Eligibility Criteria The methods of this review followed the established “PRISMA” guidelines updated in 2020 . 2.2.1. Inclusion Criteria The inclusion criteria of the review were studies in English with a timeline from 1 January 2000 to 31 December 2022. The types of studies included were observational studies and reviews.
Studies that stated the prevalence and related information, such as sampling and testing methods and the stages of the production chain of animals and POAOs, were included. 2.2.2. Exclusion Criteria The exclusion criteria were studies in any other language, such as Khmer, and studies focusing on non-foodborne hazards. Additionally, laboratory-based antimicrobial resistance-related studies without any information on sampling locations, sample numbers, analytical methods, and prevalence, as well as studies of populations outside Cambodia and studies with no prevalence data, were not included in the review. Furthermore, studies of products not of animal origin were excluded. 2.3. Databases and Search Strategy The search databases were Scopus, PubMed, and Google Scholar. The search string was “(Foodborne OR “food borne” OR food-borne OR “food safety” OR “food related” OR “food associated” OR “food derived” OR “food* illness” OR “food* disease*” OR “food* intoxica*” OR “food pathogen” OR “food* poison*” OR “food* microb*” OR “food* vir*” OR “food parasit*” OR “food* toxin” OR “food* contamina*” OR “food* hazard*” AND (Cambodia*))”. Boolean operators (AND, OR, NOT, or AND NOT) were used to combine or exclude the keywords in the search databases. 2.4. Screening and Study Selection All the search results from the three databases were compiled in a single Excel sheet and duplicates of the studies were removed in Microsoft Excel. The publication titles and abstracts were screened based on the inclusion and exclusion criteria of the study protocol. The screening was conducted solely by the first author, Shwe Phue San (S.P.S.), with the guidance of the supervisory team and external contributors. Full paper reviews were carried out manually. Full papers linked to the accepted abstracts were sought and acquired. 2.5. Quality Assessment Criteria For quality control, each selected paper was assessed using the following four quality criteria questions (for details, see ).
(1) Is the study method scientifically sound? (2) Is the laboratory method used for testing biological hazards appropriate? (3) Are the descriptions of data analysis for key outputs (prevalence or concentration) accurate and precise? (4) Are the results and findings clearly stated? These quality assessment criteria were adapted from the previous systematic literature review (SLR) conducted by ILRI . The studies were classified as “good”, “medium”, and “poor” and only good- and medium-rated studies were selected for data extraction (see ). The selected studies were presented to the research team for review and feedback. 2.6. Data Extraction Articles found to be of acceptable quality after the full-text screening were considered for data extraction. The population of interest for the review was biological hazards including biogenic amines and biotoxins detected in food-sourced animals and in POAOs at any stage of the production chain. The extracted data included the type of animal and POAO, type and name of biological hazard, prevalence- and concentration-related information (total number of samples, number of positive results, type and stage of the production chain, geographical location, sample size, sampling method, analytical method, and year of publication), and type of initiative (national initiative or initiatives of the international institutions or joint initiatives between national and international institutions). 2.7. Data Analysis Findings were heterogeneous and thus were primarily conveyed through descriptive analysis. However, the findings of the different types of parasites found in fish and fishery products yielded sufficient data to conduct a statistical analysis. One-way ANOVA was calculated to see if there were significant differences between the mean values of the prevalence of parasites yielded from three different types of sampling points (nature, village, and market).
“Nature” means the samples were taken from natural habitats such as lakes, rivers, and seashores, whereas “villages” refers to the samples collected from the villages where the stage of the chain can be end-consumers or small sellers. Finally, “market” means the samples were purchased from retail or wholesale markets. Tukey’s test, also known as the Honestly Significant Difference (HSD) test, was subsequently used to find the mean values that are significantly different from each other. 2.8. Calculation of DALY/Population of Cambodia The calculation of Disability-Adjusted Life Years (DALY) per population in Cambodia was based on the estimation provided by the World Health Organization (WHO) . The population of Cambodia, according to the World Bank data from 2022, was 16,767,842 .
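The one-way ANOVA followed by Tukey's HSD described here can be sketched in Python with SciPy. The prevalence values below are invented placeholders for illustration, not figures from this review, and `tukey_hsd` requires a reasonably recent SciPy release.

```python
from scipy.stats import f_oneway, tukey_hsd

# Hypothetical parasite prevalence (%) reported by individual studies,
# grouped by sampling point -- illustrative values, not data from this review.
nature = [60.0, 65.0, 70.0, 68.0]   # natural habitats (lakes, rivers, seashores)
village = [40.0, 45.0, 42.0, 48.0]  # village-level collection
market = [15.0, 20.0, 18.0, 22.0]   # retail/wholesale markets

# One-way ANOVA: is there any difference among the three group means?
f_stat, p_value = f_oneway(nature, village, market)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4g}")

# Tukey's HSD: which specific pairs of means differ?
res = tukey_hsd(nature, village, market)
print(res)  # table of pairwise differences, confidence intervals, and p-values
```

With well-separated groups like these, the ANOVA p-value falls well below 0.05, and the Tukey result object then identifies which pairwise contrasts drive the difference.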
A total of 8291 records were obtained from the three search databases . Following the elimination of duplicates in Excel, the number of records available for screening was reduced to 8221.
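The screening flow can be reconciled arithmetically from the counts reported in this review; a quick sanity check of the implied exclusions at each stage:

```python
# Record counts reported at the screening stages of this review.
retrieved = 8291          # records from Scopus, PubMed, and Google Scholar
after_dedup = 8221        # records left after duplicate removal in Excel
abstracts_screened = 253  # records whose abstracts were assessed

duplicates_removed = retrieved - after_dedup          # implied duplicate count
excluded_on_title = after_dedup - abstracts_screened  # implied title-stage exclusions

print(f"duplicates removed: {duplicates_removed}")
print(f"excluded at title screening: {excluded_on_title}")
```

This recovers the 7968 title-stage exclusions stated in the text, along with the 70 duplicates implied by the totals.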
Subsequently, during the “title screening” phase, 7968 records were excluded as they were found to be unrelated to food safety hazards. Consequently, the abstracts of 253 records were assessed, leading to the exclusion of 80 reports that focused on human cases rather than animals or POAOs. Ultimately, a total of 46 records were thoroughly reviewed, while 117 records were excluded due to various reasons outlined in . shows the list of the records included in the review. 3.1. Different Types of Research Initiatives As shown in , only 6 out of the 46 studies were undertaken by national research institutions without the involvement of international partners in terms of funding or technical support. In contrast, partnerships between national and two or more international institutions led to 15 studies being carried out, with financial or technical assistance provided by these collaborators. Among the bilateral studies, those conducted with Thailand had the highest frequency, followed by Sweden. The joint studies primarily involved countries from Europe. Additionally, there was one study each conducted in collaboration with Australia and the WHO under joint initiatives. The Republic of Korea, the United States of America (USA), and the French Republic (France) conducted a total of 13 studies without any partnership or collaboration with local institutions. In our review, all the studies conducted by the research institutions in the Republic of Korea focused on parasite contamination in fishery products. Except for the Kingdom of Thailand (Thailand), there was no evidence of bilateral research collaboration involving Cambodia and the neighboring countries of Lao PDR and the Socialist Republic of Viet Nam (Vietnam) within the scope of our review. Similarly, we did not identify any bilateral research collaboration between Cambodia and the People’s Republic of China (China), even in light of the Cambodia–China free trade agreement established in 2020 . 3.2.
Frequency of Studies for Different Types of Biological Hazards As shown in , half of the total number of studies conducted focused on parasites in fish and fishery products and other POAOs, making it the most extensively researched area. Nearly one-third of the studies on parasites were carried out by researchers from the Republic of Korea. Studies on bacterial hazards in POAOs followed, though by a wide margin. Viruses, biogenic amines, and biotoxins accounted for a smaller proportion of studies, with five, four, and three studies, respectively. Additionally, we did not identify any studies focusing on the concentration of toxin-producing fungi or the prevalence of Norovirus and hepatitis A viruses detected in POAOs. Moreover, none of the studies included in our review identified the pathogenic strains of Escherichia coli ( E. coli ). Consequently, the E. coli findings referenced in the review should be interpreted as hygiene indicators. 3.3. Number of Publications from 2000 to 2022 As shown in , food safety research studies in Cambodia have been steadily rising since 2000. Specifically, the research frequency concerning food biological hazards in animals and POAOs more than doubled between 2011 and 2015, reaching its peak from 2016 to 2020. The most recent period in our study (2021–2022) constituted 21% of the total studies retrieved. Our review found a total of four studies related to parasites and viral hazards from 2000 to 2005, with two studies focusing on viruses and the other two on parasites. Although the investigation of antibiotic resistance in Salmonella (S.) and Campylobacter species in retail poultry began in 2011, most of the antibiotic resistance or susceptibility studies reviewed were conducted after the year 2015. 3.4. Number of Reviewed Studies in Each Province in Cambodia Most studies (n = 22) were conducted in Phnom Penh, the capital of Cambodia.
Despite comparable population densities, Kandal, Prey Veng, and Siem Reap did not attract the same level of research on foodborne hazards: 11 studies were conducted in Kandal province, while only 4 records were documented for Prey Veng province . 3.5. Evidence of Biological Hazards Reported in POAOs in Cambodia In addition to bacteria, parasites, and viruses, biogenic amines and biotoxins were reported in the studies reviewed. Foodborne bacteria such as Brucella species, Campylobacter species, Clostridioides ( Cl. ) difficile , Salmonella species, and Vibrio ( V. ) species were observed. While Brucella species and Cl. difficile were found to have a low prevalence, the remaining bacterial hazards were detected at high levels. Additionally, high levels of hygiene indicators like E. coli and Staphylococcus ( Staph .) aureus were reported. The prevalence of Salmonella species and Staph. aureus was highlighted in the review as an indicator of potential hazards. Furthermore, the presence of astrovirus and Nipah viruses in bats was noted. The detection of hepatitis E virus in pigs and pork products was also reported. Lastly, various parasites were detected in cattle, buffalo, pigs, and fishery products in Cambodia. A summary of the evidence reported by the 46 studies included in our review is provided as . 3.6. Prevalence of Bacterial Hazards and Hazard Indicators in POAO In this review, 12 out of 46 studies (26%) had evidence of bacterial hazard prevalence in food-sourced animals and POAOs in Cambodia from 2000 to 2022. In addition to this, hazard indicators such as E. coli in POAOs and the prevalence of pathogenic bacteria on cutting boards used for chicken and pork meat at traditional markets were included. 3.6.1. Brucella spp. Out of the 12 studies reviewed for bacterial hazards and hazard indicators, only one study focused on Brucella spp. in cattle and swine .
As part of an animal disease surveillance program, a total of 1141 serological samples were collected from slaughterhouses in Takeo province, Cambodia, and screened with commercial enzyme-linked immunosorbent assay (ELISA) test kits, and doubtful samples were tested by real-time Polymerase Chain Reaction (PCR). These samples included 477 from cattle and 664 from swine. The seroprevalence of Brucella spp. in cattle was found to be 0.2%, while in swine it was 0.15%. 3.6.2. Campylobacter spp. In the review, two studies provided information on the prevalence of Campylobacter jejuni , Campylobacter coli , and Campylobacter lari in livestock and meat samples collected from various farms in Kampong Cham, Battambang, and Kampot provinces, and Phnom Penh. The 1005 samples were taken from chickens, ducks, cattle, pigs, water buffalo, quail, pigeons, geese, and pork carcasses. The studies utilized culture methods, PCR methods, and the ISO 10272-1 requirement to detect the Campylobacter spp. The PCR was more sensitive in detecting Campylobacter spp. than the culture method . Among the different livestock species, pigs exhibited the highest prevalence of Campylobacter spp., with a prevalence of 72%. This was followed by 56% in chickens and 24% in ducks . On the other hand, another study revealed 80.9% of Campylobacter species in pork carcasses in Phnom Penh . 3.6.3. Clostridioides ( Cl. ) difficile One study included in our review reported the first evidence of the presence of Cl. difficile in smoked and dried freshwater fish, specifically from Battambang, Kampong Chhnang, and Kampong Cham in Cambodia. However, the samples obtained from Kampong Thom and Siem Reap provinces did not exhibit the presence of the bacteria. Out of the 25 samples collected directly from the markets in the five provinces, 4 were found to be positive for Cl. difficile and were resistant to Clindamycin upon testing. 
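Point prevalences from small samples, such as the 4 positives out of 25 above, carry substantial uncertainty. A Wilson score interval makes this explicit; the helper below is a generic sketch and not part of the review's own methodology.

```python
from math import sqrt

def wilson_ci(positives: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = positives / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half_width = (z / denom) * sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return center - half_width, center + half_width

# Cl. difficile in smoked/dried freshwater fish: 4 positives in 25 samples.
prevalence = 4 / 25
low, high = wilson_ci(4, 25)
print(f"prevalence {prevalence:.0%}, 95% CI {low:.1%}-{high:.1%}")
```

The resulting interval (roughly 6% to 35%) shows how loosely a 25-sample survey pins down the true contamination rate.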
Furthermore, after undergoing molecular analysis, three out of the four positive samples revealed the presence of toxicity genes A and B; however, none of the samples exhibited the gene fraction associated with the binary toxin CDT . 3.6.4. Escherichia coli ( E. coli ) Four out of the forty-six articles in our review focused on determining the presence of E. coli in various food products such as fishery products, poultry, and pork. In total, 1327 samples were collected from slaughterhouses and markets located in Phnom Penh, Banteay Meanchey, and Siem Reap provinces . These samples included fecal samples, broiler rectal swabs, carcass swabs, chicken caeca, chicken neck skins, rinse water, chopping boards, fish, and fermented fish known as Prahok. The prevalence of E. coli varied from being undetected in fermented fish to as high as 89.4% in chickens. The analytical methods used in these studies included the Afnor validation method, Biorad-Rad 07/01-07/93 and BRD 0717-12/04, methods adapted from the U.S. Food and Drug Administration (USFDA)’s BAM, and the ISO 9308-1 method for E. coli detection. 3.6.5. Salmonella spp. The prevalence of Salmonella spp. in various food sources and processing sites in Cambodia was investigated in three studies included in the review . In one study, a total of 684 samples were collected from traditional markets across all 25 provinces of Cambodia . These samples included chicken meat, cutting boards used for chicken, pork, pork carcasses, and cutting boards used for pork. The analysis conducted following the ISO6579:2002 standard revealed a prevalence of 42.6% Salmonella spp. in chicken meat, 41.9% in cutting boards used for chicken, 45.1% in pork, and 30.6% in cutting boards used for pork. Similarly, another study reported 88.2% positive findings of Salmonella spp. in pork carcass samples collected from the markets in Phnom Penh . On the other hand, a study focused on the prevalence of Salmonella spp.
in a specific fishery product called Prahok at the processing sites located in Siem Reap . The authors collected 28 samples, analyzed them using the BAM method of the USFDA, and reported a prevalence of 3.5% Salmonella spp. in fermented fish. These findings highlighted the presence of Salmonella in various food sources and processing sites in Cambodia, emphasizing the need for appropriate food safety measures to prevent the transmission of this pathogen to consumers. According to the microbiological criteria of the EU, Salmonella must be absent in 25 g samples, in accordance with the sampling requirements laid down in the regulation . Salmonella ( S. ) enterica In the review, three studies revealed the prevalence of S. enterica in poultry, pork, and fishery products . In total, 1299 samples were collected from fish and fishery products, poultry products, and pork meat at the slaughterhouses and markets in Phnom Penh, Banteay Meanchey, and Siem Reap provinces. These samples included fecal samples, broiler rectal swabs, carcass swabs, chicken caeca, chicken neck skins, rinse water, chopping boards, and fish. The analytical methods were molecular identification and standard method ISO6579:2002 (E) for the detection of Salmonella in food. The prevalence of S. enterica ranged from 6% in broiler chickens to 100% in pig carcass samples at slaughterhouses. 3.6.6. Staphylococcus ( Staph .) aureus In the review, only one study examined the occurrence of Staph. aureus in various samples obtained from traditional markets across all 25 provinces in Cambodia . A total of 532 samples were gathered, including those from chicken, cutting boards used for chicken, pork, and cutting boards used for pork. The samples underwent testing for the presence/absence and quantification of coagulase-positive staphylococci (CPS) in accordance with ISO 6888-1:1999. The prevalence of Staph.
aureus was found to be 38.2% in chicken samples, 17.7% in cutting boards used for chicken, 28.9% in pork meat samples, and 11.3% in cutting boards used for pork. 3.6.7. Vibrio ( V .) Species Only one study in the review revealed the potential Vibrio risks in fermented fishery products (Prahok) . A total of 28 samples were gathered from the processing facilities in Siem Reap province and subjected to an examination to identify Vibrio species using the partial adaptation method outlined in the BAM. The two different Vibrio tests yielded conflicting outcomes regarding the presence of Vibrio spp. The CHROMagar Vibrio test, which uses a chromogenic medium, suggested the potential existence of V. parahaemolyticus , V. vulnificus , V. cholerae , and V. alginolyticus based on distinct colony colors. However, the Thiosulfate–citrate–bile salts–sucrose (TCBS) agar test indicated a negative result for Vibrio in all the samples. 3.7. Evidence of Antimicrobial Resistance Genes in Several Bacteria Found in POAO Six studies presented high percentages of antibiotic resistance in E. coli , Salmonella , and Campylobacter species, as well as the emergence of extended-spectrum beta-lactamase (ESBL)-producing S. enterica in slaughterhouses, markets, and retail meats in Phnom Penh and Banteay Meanchey, Cambodia . A total of 1798 samples were collected from feces, carcasses, rectal swabs, skin, rinsed water, and chopping boards, with food source animals including fish, pigs, pork, and chicken. In addition, Cl. difficile isolated from smoked and dried freshwater fish showed resistance to the antibiotic clindamycin . 3.8. Evidence of Parasitic Hazards in Product of Animal Origins A total of 23 out of 46 studies in our review revealed the presence of different types of parasites in POAOs in Cambodia.
Among these, three studies highlighted the prevalence of Fasciola and Sarcocystis species in cattle and buffalo, one study identified the contamination of Gnathostoma spinigerum in edible frogs, five studies demonstrated the presence of different parasites in pigs or pork, and thirteen studies concentrated on the evidence of various parasites in fish and fishery products . The parasites were identified morphologically, including the use of electron microscopy, the ELISA method, and molecular methods. 3.8.1. Fasciola spp. The reported prevalence of Fasciola spp. in cattle ranged from 5% to 20% . A total of 2391 fecal samples were collected from villages in Kampong Speu and Pursat provinces. Individual nematode egg counts were performed on the fecal samples using the quantitative McMaster method with a sensitivity of 50 eggs per gram of feces (EPG). The identification of gastrointestinal nematode genera was based on the morphological analysis of third-stage larvae sourced from the coprocultures of pooled samples. Fasciola gigantica In our review, two studies reported positive findings of Fasciola gigantica in cattle and buffaloes. The prevalence of bovine fasciolosis in Cambodia posed a risk to approximately 28% of cattle and buffaloes . The study revealed 11.4% positive results (160 out of 1046 samples) for Fasciola gigantica . The fecal samples were collected from 11 provinces, namely Kandal, Kratie, Takeo, Kampong Speu, Kampong Cham, Pursat, Battambang, Kampong Thom, Kampong Chhnang, Prey Veng, and Svay Rieng, and a modified version of the Balivet egg count technique was used for analysis. Notably, Kandal province showed the highest positive rate, reaching 56.8%. In addition, another study reported 16.37% positive results of Fasciola gigantica from 171 fecal samples collected from villages in Pursat province and analyzed by using the Modified Balivat Fasciola egg counting technique . 3.8.2.
Gnathostoma spinigerum A study included in the review found out a significant proportion of edible frog ( Hoplobatrachus rugulosus ) samples obtained from the market in Phnom Penh were contaminated with a parasite known as Gnathostoma spinigerum , with a prevalence rate of 60% . However, no traces of this parasite were detected in the 10 edible frog samples collected from Takeo province, as well as in the 34 snakehead fish samples taken from the markets in Phnom Penh, Takeo, and Pursat provinces. This highlights the variation in the prevalence of Gnathostoma spinigerum among different regions and species, emphasizing the importance of monitoring and controlling the spread of this parasite to ensure food safety and public health. 3.8.3. Sarcocystis Species One of our review studies presented a 100% prevalence of Sarcocystis species, namely Sarcocystis heydorni and Sarcocystis cruzi , in the cardiac tissues of both cattle and buffaloes. Eight samples were collected from the hearts of these animals in Siem Reap province. The samples were subjected to microscopic examination and the presence of foodborne zoonotic pathogens was confirmed using molecular methods . 3.8.4. Parasites in Pig/Pork Meat The evidence of various types of parasites in pig and pork meat was examined through a review of five studies . Please see below for details. 3.8.5. Parasites in Fish and Fishery Products A total of 14 papers included in the review examined the prevalence of different parasite types in fish and fishery products . Over the past 22 years, more than 9709 samples have been analyzed to detect parasites in fishery products. These studies were conducted in 10 out of the 25 provinces in Cambodia, namely Phnom Penh, Pursat, Kampong Cham, Takeo, Kratie, Kandal, Steng Trung, Siem Reap, Kampong Thom, and Prey Veng. The samples were collected from various sources such as lakes, rivers, aquaculture sites, the sea, villages, and markets, encompassing both pre-harvest and post-harvest stages. 
The detection methods employed in these studies were both morphological and molecular. displays the mean prevalence of the various types of parasites found in fish and fishery products. The mean prevalence of Haplorchis pumilio was the highest, reaching 70%, whereas Haplorchis yokogawai had the lowest mean prevalence, recorded as 15.35%. Other parasites were also present, including Pygidiopsis cambodiensis n. sp., Stellantchasmus falcatus, Gnathostoma spinigerum, Procerovum sp., Centrocestus formosanus, Artyfechinostomum malayanum, Echinostoma mekongi, and Angiostrongylus cantonensis. These hazards were categorized under "other parasites" because the sample size for each was too small to present individual mean values. The mean prevalence of parasites in the samples collected from markets was 58.85%, while the mean values for the samples taken from nature and villages were 16.46% and 38.45%, respectively. We found that the prevalence of zoonotic parasites in fishery products ranged from 0.25% to 60% in samples taken from nature (lakes and rivers), from 10% to 90% in samples taken from villages, and from 6.7% to 100% in samples taken from markets. A one-way ANOVA yielded a p value of 0.0067; the differences between the mean values were therefore highly significant. Consequently, Tukey's test was conducted to identify the specific differences between the individual mean values. The results indicated that the mean values of the "nature" and "market" samples differed significantly, whereas the mean value of the "villages" samples did not differ significantly from either the "nature" or the "market" samples. The letter codes (compact letter display) show the results of Tukey post hoc multiple comparisons: bars with the same letter are not significantly different at the p = 0.05 level.

3.9. Evidence of Viral Hazards in POAO

Five studies carried out in Cambodia between 2000 and 2022, focusing on viruses found in bats and pigs, were included in the review .

3.9.1. Astrovirus

Our review included a study that reported a prevalence of astrovirus of over 5% in bat samples collected from farms in Ratanakiri, Stung Treng, and Prey Veng provinces . In addition to fecal samples, rectal, oral, and tissue samples were also collected, and a semi-nested PCR method was used for the identification of astrovirus.

3.9.2. Nipah Virus

The review included three studies that specifically examined the Nipah virus in bats . A total of 5867 serum and urine samples were gathered and analyzed using serological methods. The samples were collected from roosts located in Phnom Penh, Battambang, Kampong Cham, Kandal, Prey Veng, and Siem Reap provinces. The highest prevalence of the Nipah virus was identified in samples taken from restaurants in Kampong Cham (11.5%), whereas the samples from the natural environment in Battambang and Kandal exhibited the lowest prevalence, of less than 2%.

3.9.3. Hepatitis E Virus

The review included a study that provided evidence of the presence of genotype 1 hepatitis E virus in fecal and serum samples obtained from pig farms in Phnom Penh . The study reported a positive finding rate of 12.15% among the 181 samples after molecular analysis.

3.10. Biogenic Amines

Four studies in the review revealed different concentrations of biogenic amines in fish and fishery products in eight provinces of Cambodia, namely Phnom Penh, Kampong Som/Sihanouk Ville, Battambang, Kampong Chhnang, Kampong Cham, Kampong Thom, Kandal, and Siem Reap . However, one study did not specify the exact location of the sampling point . A total of 100 samples were collected from natural sources (such as lakes), fishponds, processing sites, cold storage facilities, and shops.
The concentrations of biogenic amines were determined and confirmed using advanced techniques including high-performance liquid chromatography with a fluorescence detector (HPLC-FLD), ultra-performance liquid chromatography (UPLC), and liquid chromatography–mass spectrometry (LC-MS). Two studies reported low concentrations of histamine in freshwater fish, ranging from "not detected" to 24.2 ppm . Likewise, another study reported low levels of histamine in both freshwater and marine fishes, ranging from 5.32 to 9.23 ppm . In contrast, high concentrations of biogenic amines, particularly histamine (>500 ppm) and tyramine (>600 ppm), were reported in different types of fermented fishery products . These findings exceeded both the Cambodian national limit of 100 ppm and the European Union limit of 200 ppm . Except for fish sauce, there are no defined maximum limits (MLs) for histamine in other fishery products in Cambodia.

3.11. Biotoxins

Three studies that revealed the concentrations of paralytic shellfish toxins and tetrodotoxin in Mekong pufferfish and horseshoe crabs were included in our review . A total of 49 samples were gathered from various locations including lakes, seashores, and wet markets in Phnom Penh, Kandal, Kratie, and Sihanouk Ville. The samples were analyzed by HPLC and LC-MS. The evidence of different concentrations of tetrodotoxin and paralytic shellfish toxins in horseshoe crabs and pufferfish is presented in .

3.12. The Estimates of Regional and National Foodborne Disease Burden in Cambodia

The Foodborne Disease Burden Epidemiology Reference Group (FERG), established by the World Health Organization (WHO), provided its initial findings on the global and regional impact of foodborne diseases. These findings included estimates of the occurrence, mortality, and overall burden caused by different foodborne hazards. As per the WHO regional classification, Cambodia falls under Western Pacific Region (WPR) B.
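The national burden estimates in Section 3.12 scale the WPR B rates, expressed as DALYs per 100,000 people, to Cambodia's population. A minimal sketch of that conversion; the rate and population figures below are illustrative placeholders, not values from the FERG report:

```python
# Convert a regional burden rate (DALYs per 100,000 people) into an
# absolute national estimate. Inputs are illustrative placeholders,
# not figures taken from the FERG report.

def national_dalys(dalys_per_100k: float, population: int) -> float:
    """Scale a per-100,000 burden rate to a whole population."""
    return dalys_per_100k * population / 100_000

# Hypothetical example: 50 DALYs/100,000 applied to a population of
# 16 million gives an estimated 8000 DALYs.
print(national_dalys(50.0, 16_000_000))  # 8000.0
```

The same scaling applies hazard by hazard, so hazard-specific WPR B rates yield hazard-specific national estimates.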
As shown in , the DALYs per population in Cambodia were calculated from the DALYs per 100,000 people in WPR B.

As shown in , only 6 out of the 46 studies were undertaken by national research institutions without the involvement of international partners in terms of funding or technical support. In contrast, partnerships between national and two or more international institutions produced 15 studies, with financial or technical assistance provided by these collaborators. Among the bilateral studies, those conducted with Thailand were the most frequent, followed by those with Sweden. The joint studies primarily involved European countries; additionally, one study each was conducted in collaboration with Australia and the WHO under joint initiatives. The Republic of Korea, the United States of America (USA), and the French Republic (France) conducted a total of 13 studies without any partnership or collaboration with local institutions. In our review, all the studies conducted by research institutions in the Republic of Korea focused on parasite contamination in fishery products. Except for the Kingdom of Thailand (Thailand), there was no evidence of bilateral research collaboration involving Cambodia and the neighboring countries of Lao PDR and the Socialist Republic of Viet Nam (Vietnam) within the scope of our review. Similarly, we did not identify any bilateral research collaboration between Cambodia and the People's Republic of China (China), even in light of the Cambodia–China free trade agreement established in 2020 .

As shown in , half of the studies focused on parasites in fish and fishery products and other POAOs, making parasites the most extensively researched area; nearly one-third of these studies were carried out by researchers from the Republic of Korea. These were followed, though not closely, by studies on bacterial hazards in POAOs.
Viruses, biogenic amines, and biotoxins accounted for a smaller proportion of studies, with five, four, and three studies, respectively. We did not identify any studies focusing on the concentration of toxin-producing fungi or on the prevalence of norovirus and hepatitis A virus in POAOs. Moreover, none of the studies included in our review identified pathogenic strains of Escherichia coli (E. coli); consequently, the E. coli findings referenced in this review should be interpreted as hygiene indicators.

As shown in , food safety research in Cambodia has risen steadily since 2000. Specifically, the frequency of research on biological food hazards in animals and POAOs more than doubled between 2011 and 2015, reaching its peak from 2016 to 2020. The most recent period in our study (2021–2022) accounted for 21% of the total studies retrieved. Our review found a total of four studies related to parasites and viral hazards from 2000 to 2005, two focusing on viruses and two on parasites. Although the investigation of antibiotic resistance in Salmonella (S.) and Campylobacter species in retail poultry began in 2011, most of the antibiotic resistance or susceptibility studies reviewed were conducted after 2015. Most studies (n = 22) were conducted in Phnom Penh, the capital of Cambodia. Despite having comparable population densities, Kandal, Prey Veng, and Siem Reap did not show the same level of research activity on foodborne hazards, with 11 studies conducted in Kandal province and 4 documented for Prey Veng .

In addition to bacteria, parasites, and viruses, biogenic amines and biotoxins were reported in the studies reviewed. Foodborne bacteria such as Brucella species, Campylobacter species, Clostridioides (Cl.) difficile, Salmonella species, and Vibrio (V.) species were observed. While Brucella species and Cl.
difficile were found to have a low prevalence, the remaining bacterial hazards were detected at high levels. Additionally, high levels of hygiene indicators like E. coli and Staphylococcus ( Staph .) aureus were reported. The prevalence of Salmonella species and Staph. aureus was highlighted in the review as an indicator of potential hazards. Furthermore, the presence of astrovirus and Nipah viruses in bats was noted. The detection of hepatitis E virus in pigs and pork products was also reported. Lastly, various parasites were detected in cattle, buffalo, pigs, and fishery products in Cambodia. A summary of the evidence reported by the 46 studies included in our review is provided as . In this review, 12 out of 46 studies (26%) had evidence of bacterial hazard prevalence in food-sourced animals and POAOs in Cambodia from 2000 to 2022. In addition to this, hazard indicators such as E. coli in POAOs and the prevalence of pathogenic bacteria on cutting boards used for chicken and pork meat at traditional markets were included. 3.6.1. Brucella spp. Out of the 12 studies reviewed for bacterial hazards and hazard indicators, only one study focused on Brucella spp. in cattle and swine . As part of an animal disease surveillance program, a total of 1141 serological samples were collected from slaughterhouses in Takeo province, Cambodia, and screened with commercial enzyme-linked immunosorbent assay (ELISA) test kits, and doubtful samples were tested by real-time Polymerase Chain Reaction (PCR). These samples included 477 from cattle and 664 from swine. The seroprevalence of Brucella spp. in cattle was found to be 0.2%, while in swine it was 0.15%. 3.6.2. Campylobacter spp. In the review, two studies provided information on the prevalence of Campylobacter jejuni , Campylobacter coli , and Campylobacter lari in livestock and meat samples collected from various farms in Kampong Cham, Battambang, and Kampot provinces, and Phnom Penh. 
The 1005 samples were taken from chickens, ducks, cattle, pigs, water buffalo, quail, pigeons, geese, and pork carcasses. The studies utilized culture methods, PCR methods, and the ISO 10272-1 requirement to detect the Campylobacter spp. The PCR was more sensitive in detecting Campylobacter spp. than the culture method . Among the different livestock species, pigs exhibited the highest prevalence of Campylobacter spp., with a prevalence of 72%. This was followed by 56% in chickens and 24% in ducks . On the other hand, another study revealed 80.9% of Campylobacter species in pork carcasses in Phnom Penh . 3.6.3. Clostridioides ( Cl. ) difficile One study included in our review reported the first evidence of the presence of Cl. difficile in smoked and dried freshwater fish, specifically from Battambang, Kampong Chhnang, and Kampong Cham in Cambodia. However, the samples obtained from Kampong Thom and Siem Reap provinces did not exhibit the presence of the bacteria. Out of the 25 samples collected directly from the markets in the five provinces, 4 were found to be positive for Cl. difficile and were resistant to Clindamycin upon testing. Furthermore, after undergoing molecular analysis, three out of the four positive samples revealed the presence of toxicity genes A and B; however, none of the samples exhibited the gene fraction associated with the binary toxin CDT . 3.6.4. Escherichia coli ( E. coli ) Four out of the forty-six articles of our review focused on determining the presence of E. coli in various food products such as fishery products, poultry, and pork. In total, 1327 samples were from slaughterhouses and markets located in Phnom Penh, Bantaey Meanchay, and Siem Reap provinces . These samples included fecal samples, broiler rectal swabs, carcass swabs, chicken caeca, chicken neck skins, rinse water, chopping boards, fish, and fermented fish known as Prahok. The prevalence of E. 
coli varied from being undetected in fermented fish to as high as 89.4% in chickens. The analytical methods used in these studies included the Afnor validation method, Biorad-Rad 07/01-07/93 and BRD 0717-12/04, methods adapted from the U.S. Food and Drug Administration (USFDA)’s BAM, and the ISO 9308-1 method for E. coli detection. 3.6.5. Salmonella spp. The prevalence of Salmonella spp. in various food sources and processing sites in Cambodia had been investigated in three studies included in the review . In a study, a total of 684 samples were collected from traditional markets across all 25 provinces of the entire Cambodia . These samples included chicken meat, cutting boards used for chicken, pork, pork carcasses, and cutting boards used for pork. The analysis conducted following the ISO6579:2002 standard revealed a prevalence of 42.6% Salmonella spp. in chicken meat, 41.9% in cutting boards used for chicken, 45.1% in pork, and 30.6% in cutting boards used for pork. Similarly, another study reported 88.2% positive findings of Salmonella spp. in pork carcass samples collected from the markets in Phnom Penh . On the other hand, a study focused on the prevalence of Salmonella spp. in a specific fishery product called Prahok at the processing sites located in Siem Reap . They collected 28 samples and analyzed them using the BAM method of USFDA and reported a prevalence of 3.5% Salmonella spp. in fermented fish. These findings highlighted the presence of Salmonella in various food sources and processing sites in Cambodia, emphasizing the need for appropriate food safety measures to prevent the transmission of this pathogen to consumers. According to the microbiological criteria of the EU, Salmonella must be absent in 25 g samples, in accordance with the sampling requirements laid down in the regulation. . Salmonella ( S. ) enterica In the review, three studies revealed the prevalence of S. enterica in poultry, pork, and fishery products . 
In total, 1299 samples were collected from fish and fishery products, poultry products, and pork meat at the slaughterhouses and markets in Phnom Penh, Banteay Meanchey, and Siem Reap provinces. These samples included fecal samples, broiler rectal swabs, carcass swabs, chicken caeca, chicken neck skins, rinse water, chopping boards, and fish. The analytical methods were molecular identification and standard method ISO6579:2002 (E) for the detection of Salmonella in food. The prevalence of S. enterica ranged from 6% in broiler chickens to 100% in pig carcass samples at slaughterhouses. 3.6.6. Staphylococcus ( Staph .) aureus In the review, only one study examined the occurrence of Staph. aureus in various samples obtained from traditional markets across all 25 provinces in Cambodia . A total of 532 samples were gathered, including those from chicken, cutting boards used for chicken, pork, and cutting boards used for pork. The samples underwent testing for the presence/absence and quantification of coagulase-positive staphylococci (CPS) in accordance with ISO 6888-1:1999. The prevalence of Staph. aureus was found to be 38.2% in chicken samples, 17.7% in cutting boards used for chicken, 28.9% in pork meat samples, and 11.3% in cutting boards used for pork. 3.6.7. Vibrio ( V .) Species Only one study in the review revealed the potential Vibrio risks in fermented fishery products (Prahok) . A total of 28 samples were gathered from the processing facilities in Siem Reap province and subjected to an examination to identify Vibrio species using the partial adaptation method outlined in the BAM. The two different Vibrio tests yielded conflicting outcomes regarding the presence of Vibrio spp. The CHROMagar Vibrio test by using a chromogenic medium suggested the potential existence of V. parahaemolyticus , V. vulnificus , V. cholerae , and V. alginolyticus based on distinct colony colors. 
However, the Thiosulfate–citrate–bile salts–sucrose (TCBS) agar test indicated a negative result for Vibrio in all the samples. Brucella spp. Out of the 12 studies reviewed for bacterial hazards and hazard indicators, only one study focused on Brucella spp. in cattle and swine . As part of an animal disease surveillance program, a total of 1141 serological samples were collected from slaughterhouses in Takeo province, Cambodia, and screened with commercial enzyme-linked immunosorbent assay (ELISA) test kits, and doubtful samples were tested by real-time Polymerase Chain Reaction (PCR). These samples included 477 from cattle and 664 from swine. The seroprevalence of Brucella spp. in cattle was found to be 0.2%, while in swine it was 0.15%. Campylobacter spp. In the review, two studies provided information on the prevalence of Campylobacter jejuni , Campylobacter coli , and Campylobacter lari in livestock and meat samples collected from various farms in Kampong Cham, Battambang, and Kampot provinces, and Phnom Penh. The 1005 samples were taken from chickens, ducks, cattle, pigs, water buffalo, quail, pigeons, geese, and pork carcasses. The studies utilized culture methods, PCR methods, and the ISO 10272-1 requirement to detect the Campylobacter spp. The PCR was more sensitive in detecting Campylobacter spp. than the culture method . Among the different livestock species, pigs exhibited the highest prevalence of Campylobacter spp., with a prevalence of 72%. This was followed by 56% in chickens and 24% in ducks . On the other hand, another study revealed 80.9% of Campylobacter species in pork carcasses in Phnom Penh . Clostridioides ( Cl. ) difficile One study included in our review reported the first evidence of the presence of Cl. difficile in smoked and dried freshwater fish, specifically from Battambang, Kampong Chhnang, and Kampong Cham in Cambodia. 
However, the samples obtained from Kampong Thom and Siem Reap provinces did not exhibit the presence of the bacteria. Out of the 25 samples collected directly from the markets in the five provinces, 4 were found to be positive for Cl. difficile and were resistant to Clindamycin upon testing. Furthermore, after undergoing molecular analysis, three out of the four positive samples revealed the presence of toxicity genes A and B; however, none of the samples exhibited the gene fraction associated with the binary toxin CDT . Escherichia coli ( E. coli ) Four out of the forty-six articles of our review focused on determining the presence of E. coli in various food products such as fishery products, poultry, and pork. In total, 1327 samples were from slaughterhouses and markets located in Phnom Penh, Bantaey Meanchay, and Siem Reap provinces . These samples included fecal samples, broiler rectal swabs, carcass swabs, chicken caeca, chicken neck skins, rinse water, chopping boards, fish, and fermented fish known as Prahok. The prevalence of E. coli varied from being undetected in fermented fish to as high as 89.4% in chickens. The analytical methods used in these studies included the Afnor validation method, Biorad-Rad 07/01-07/93 and BRD 0717-12/04, methods adapted from the U.S. Food and Drug Administration (USFDA)’s BAM, and the ISO 9308-1 method for E. coli detection. Salmonella spp. The prevalence of Salmonella spp. in various food sources and processing sites in Cambodia had been investigated in three studies included in the review . In a study, a total of 684 samples were collected from traditional markets across all 25 provinces of the entire Cambodia . These samples included chicken meat, cutting boards used for chicken, pork, pork carcasses, and cutting boards used for pork. The analysis conducted following the ISO6579:2002 standard revealed a prevalence of 42.6% Salmonella spp. 
in chicken meat, 41.9% in cutting boards used for chicken, 45.1% in pork, and 30.6% in cutting boards used for pork. Similarly, another study reported 88.2% positive findings of Salmonella spp. in pork carcass samples collected from the markets in Phnom Penh . On the other hand, a study focused on the prevalence of Salmonella spp. in a specific fishery product called Prahok at the processing sites located in Siem Reap . They collected 28 samples and analyzed them using the BAM method of USFDA and reported a prevalence of 3.5% Salmonella spp. in fermented fish. These findings highlighted the presence of Salmonella in various food sources and processing sites in Cambodia, emphasizing the need for appropriate food safety measures to prevent the transmission of this pathogen to consumers. According to the microbiological criteria of the EU, Salmonella must be absent in 25 g samples, in accordance with the sampling requirements laid down in the regulation. . Salmonella ( S. ) enterica In the review, three studies revealed the prevalence of S. enterica in poultry, pork, and fishery products . In total, 1299 samples were collected from fish and fishery products, poultry products, and pork meat at the slaughterhouses and markets in Phnom Penh, Banteay Meanchey, and Siem Reap provinces. These samples included fecal samples, broiler rectal swabs, carcass swabs, chicken caeca, chicken neck skins, rinse water, chopping boards, and fish. The analytical methods were molecular identification and standard method ISO6579:2002 (E) for the detection of Salmonella in food. The prevalence of S. enterica ranged from 6% in broiler chickens to 100% in pig carcass samples at slaughterhouses. ( S. ) enterica In the review, three studies revealed the prevalence of S. enterica in poultry, pork, and fishery products . 
In total, 1299 samples were collected from fish and fishery products, poultry products, and pork meat at the slaughterhouses and markets in Phnom Penh, Banteay Meanchey, and Siem Reap provinces. These samples included fecal samples, broiler rectal swabs, carcass swabs, chicken caeca, chicken neck skins, rinse water, chopping boards, and fish. The analytical methods were molecular identification and standard method ISO6579:2002 (E) for the detection of Salmonella in food. The prevalence of S. enterica ranged from 6% in broiler chickens to 100% in pig carcass samples at slaughterhouses. Staphylococcus ( Staph .) aureus In the review, only one study examined the occurrence of Staph. aureus in various samples obtained from traditional markets across all 25 provinces in Cambodia . A total of 532 samples were gathered, including those from chicken, cutting boards used for chicken, pork, and cutting boards used for pork. The samples underwent testing for the presence/absence and quantification of coagulase-positive staphylococci (CPS) in accordance with ISO 6888-1:1999. The prevalence of Staph. aureus was found to be 38.2% in chicken samples, 17.7% in cutting boards used for chicken, 28.9% in pork meat samples, and 11.3% in cutting boards used for pork. Vibrio ( V .) Species Only one study in the review revealed the potential Vibrio risks in fermented fishery products (Prahok) . A total of 28 samples were gathered from the processing facilities in Siem Reap province and subjected to an examination to identify Vibrio species using the partial adaptation method outlined in the BAM. The two different Vibrio tests yielded conflicting outcomes regarding the presence of Vibrio spp. The CHROMagar Vibrio test by using a chromogenic medium suggested the potential existence of V. parahaemolyticus , V. vulnificus , V. cholerae , and V. alginolyticus based on distinct colony colors. 
However, the Thiosulfate–citrate–bile salts–sucrose (TCBS) agar test indicated a negative result for Vibrio in all the samples. Six studies presented the high percentages of antibiotic resistance in E. coli , Salmonella , and Campylobacter species, as well as the emergence of extended-spectrum beta-lactamase (ESBL)-producing S. enterica in slaughterhouses, markets, and retail meats in Phnom Penh and Banteay Meanchey, Cambodia . A total of 1798 samples were collected from feces, carcasses, rectal swabs, skin, rinsed water, and chopping boards, with food source animals including fish, pigs, pork, and chicken. In addition, Cl. difficile isolated from smoked and dried freshwater fish showed their resistance to antibiotic clindamycin . A total of 23 out of 46 studies in our review revealed the presence of different types of parasites in POAOs in Cambodia. Among these, three studies highlighted the prevalence of Faciola and Sarcocystic species in cattle and buffalo, one study identified the contamination of Gnathostoma spinigerum in edible frogs, five studies demonstrated the presence of different parasites in pigs or pork, and thirteen studies concentrated on the evidence of various parasites in fish and fishery products . The parasites were identified morphologically including the use of electronic microscopy, ELISA method, and molecular methods. 3.8.1. Fasciola spp. The evidence of prevalence of Faciola spp. in cattle ranged from 5% to 20% . A total of 2391 fecal samples were collected from villages in Kampong Speu and Pursat provinces. Individual nematode egg counts were performed on the fecal samples using the quantitative McMaster method with a sensitivity of 50 eggs per gram of feces (EPG). The identification of gastrointestinal nematode genera was based on the morphological analysis of third-stage larvae sourced from the coprocultures of pooled samples. 
Fasiola gigantica In our review, two studies reported positive findings of Fasciola gigantica in cattle and buffaloes. The prevalence of bovine fasciolosis in Cambodia posed a risk to approximately 28% of cattle and buffaloes . The study revealed 11.4% positive results (160 out of 1046 samples) for Fasciola gigantica . The fecal samples were collected from 11 provinces, namely Kandal, Kratie, Takeo, Kampong Speu, Kampong Cham, Pursat, Battambang, Kampong Thom, Kampong Chhnang, Prey Veng, and Svay Rieng, and a modified version of the Balivet egg count technique was used for analysis. Notably, Kandal province showed the highest positive rate, reaching 56.8%. In addition, another study reported 16.37% positive results of Fasciola gigantica from 171 fecal samples collected from villages in Pursat province and analyzed by using the Modified Balivat Fasciola egg counting technique . 3.8.2. Gnathostoma spinigerum A study included in the review found out a significant proportion of edible frog ( Hoplobatrachus rugulosus ) samples obtained from the market in Phnom Penh were contaminated with a parasite known as Gnathostoma spinigerum , with a prevalence rate of 60% . However, no traces of this parasite were detected in the 10 edible frog samples collected from Takeo province, as well as in the 34 snakehead fish samples taken from the markets in Phnom Penh, Takeo, and Pursat provinces. This highlights the variation in the prevalence of Gnathostoma spinigerum among different regions and species, emphasizing the importance of monitoring and controlling the spread of this parasite to ensure food safety and public health. 3.8.3. Sarcocystis Species One of our review studies presented a 100% prevalence of Sarcocystis species, namely Sarcocystis heydorni and Sarcocystis cruzi , in the cardiac tissues of both cattle and buffaloes. Eight samples were collected from the hearts of these animals in Siem Reap province. 
The samples were subjected to microscopic examination and the presence of foodborne zoonotic pathogens was confirmed using molecular methods . 3.8.4. Parasites in Pig/Pork Meat The evidence of various types of parasites in pig and pork meat was examined through a review of five studies . Please see below for details. 3.8.5. Parasites in Fish and Fishery Products A total of 14 papers included in the review examined the prevalence of different parasite types in fish and fishery products . Over the past 22 years, more than 9709 samples have been analyzed to detect parasites in fishery products. These studies were conducted in 10 out of the 25 provinces in Cambodia, namely Phnom Penh, Pursat, Kampong Cham, Takeo, Kratie, Kandal, Steng Trung, Siem Reap, Kampong Thom, and Prey Veng. The samples were collected from various sources such as lakes, rivers, aquaculture sites, the sea, villages, and markets, encompassing both pre-harvest and post-harvest stages. The detection methods employed in these studies were both morphological and molecular techniques. displays the mean prevalence of various types of parasites found in fish and fishery products. The prevalence of Haplorchis pumilio was found to have the highest mean value, reaching 70%. On the other hand, Haplorchis yokogawei had the lowest mean value of prevalence, which was recorded as 15.35%. Additionally, there were other parasites present, including Pygidiopsis cambodiensis n. sp., Stellantchasmus falcatus , Gnathostoma spinigerum , Procerovum sp., Centrocestus formosanus , Artyfechinostomum malayanum , Echinostoma mekongi , and Angiostrongylus cantonensis . These hazards were categorized under “other parasites” due to the limited sample size for each parasite, making it impractical to present their individual mean values. 
The mean prevalence of parasites in the samples collected from markets was found to be 58.85%, while the mean values for the samples taken from nature and villages were 16.46% and 38.45%, respectively. We found that the prevalence of zoonotic parasites in fishery products ranged from 0.25% to 60% in the samples taken from nature (lakes and rivers), from 10% to 90% in the samples taken from villages, and from 6.7% to 100% in the samples taken from markets. A one-way ANOVA test yielded a p value of 0.0067; thus, the differences in the mean values were highly significant. Consequently, Tukey’s test was conducted to identify the specific differences between the individual mean values. The results indicated that the mean values of the “nature” and “market” samples were significantly different, whereas the mean value of the “villages” samples did not differ significantly from either the “nature” or “market” samples. The letter codes (compact letter display) show the results of Tukey post hoc multiple comparisons: bars with the same letter are not significantly different at the p = 0.05 level.
Fasciola spp.
The evidence of the prevalence of Fasciola spp. in cattle ranged from 5% to 20%. A total of 2391 fecal samples were collected from villages in Kampong Speu and Pursat provinces. Individual nematode egg counts were performed on the fecal samples using the quantitative McMaster method with a sensitivity of 50 eggs per gram of feces (EPG). The identification of gastrointestinal nematode genera was based on the morphological analysis of third-stage larvae sourced from the coprocultures of pooled samples.
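The source-wise comparison described earlier in this section (one-way ANOVA across nature, village, and market samples, followed by Tukey’s post hoc test) can be sketched with a hand-rolled F statistic. The group values below are placeholders, not the per-study prevalences behind the reported p = 0.0067; in practice a library routine (e.g., scipy.stats.f_oneway, followed by statsmodels’ pairwise_tukeyhsd for the pairwise comparisons and letter codes) would also supply the p value.

```python
from statistics import mean

# Placeholder prevalence values (%) per sampling source.
groups = {
    "nature":   [10.0, 20.0, 30.0],
    "villages": [30.0, 40.0, 50.0],
    "market":   [50.0, 60.0, 70.0],
}

all_values = [v for vs in groups.values() for v in vs]
grand_mean = mean(all_values)
k, n = len(groups), len(all_values)

# Between-group and within-group sums of squares
ss_between = sum(len(vs) * (mean(vs) - grand_mean) ** 2 for vs in groups.values())
ss_within = sum((v - mean(vs)) ** 2 for vs in groups.values() for v in vs)

# F = MS_between / MS_within; the p value then comes from the F(k-1, n-k) distribution
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 2))  # → 12.0 for these placeholder values
```

A large F relative to the F(k−1, n−k) distribution is what yields a small p value of the kind reported above; Tukey’s test then localizes which pairs of sources (here, “nature” vs. “market”) drive the difference.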
From 2000 to 2022, a total of five studies carried out in Cambodia, focusing on viruses found in bats and pigs, were included in the review.
3.9.1. Astrovirus
In our review, we included evidence from a study that reported the prevalence of astrovirus, which was found to be over 5% in bat samples collected from farms in Ratanakiri, Stung Treng, and Prey Veng provinces.
In addition to fecal samples, rectal, oral, and tissue samples were also collected, and a semi-nested PCR method was used for the identification of astrovirus.
3.9.2. Nipah Virus
The review included three studies that specifically examined the Nipah virus in bats. A total of 5867 serum and urine samples were analyzed using serological methods. The samples were collected from roosts located in Phnom Penh, Battambang, Kampong Cham, Kandal, Prey Veng, and Siem Reap provinces. The highest prevalence of the Nipah virus was identified in samples taken from restaurants in Kampong Cham (11.5%), whereas the samples from the natural environment in Battambang and Kandal exhibited the lowest prevalence, of less than 2%.
3.9.3. Hepatitis E Virus
The review included a study that provided evidence of the presence of genotype 1 hepatitis E virus in fecal and serum samples obtained from pig farms in Phnom Penh. The study reported a positive rate of 12.15% among the 181 samples after molecular analysis.
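The positive-rate figures quoted throughout this section are simple proportions of positive samples. A minimal helper, with a back-calculated positive count of 22 (illustrative — the hepatitis E study quoted here reports only the 12.15% rate and the 181-sample denominator):

```python
def prevalence_pct(positives: int, total: int) -> float:
    """Apparent prevalence as a percentage of samples testing positive."""
    return round(100 * positives / total, 2)

# 22 positives out of 181 samples reproduces the ~12.15% rate reported
# for hepatitis E virus; the 22 is an illustrative back-calculation.
print(prevalence_pct(22, 181))  # → 12.15
```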
In the review, four studies revealed different concentrations of biogenic amines in fish and fishery products in eight provinces of Cambodia, namely Phnom Penh, Kampong Som/Sihanouk Ville, Battambang, Kampong Chhnang, Kampong Cham, Kampong Thom, Kandal, and Siem Reap. However, one study did not specify the exact location of the sampling point. A total of 100 samples were collected from natural sources (such as lakes), fishponds, processing sites, cold storage facilities, and shops. The concentration of biogenic amines was determined and confirmed using techniques including high-performance liquid chromatography with a fluorescence detector (HPLC-FLD), ultra-performance liquid chromatography (UPLC), and liquid chromatography–mass spectrometry (LC-MS). Two studies reported low concentrations of histamine in freshwater fish, ranging from “not detected” to 24.2 ppm. Likewise, another study reported low levels of histamine in both freshwater and marine fishes, ranging from 5.32 to 9.23 ppm. On the other hand, the highest concentrations of biogenic amines, particularly histamine (>500 ppm) and tyramine (>600 ppm), were reported in different types of fermented fishery products. These findings exceeded both the Cambodian national limit of 100 ppm and the European Union limit of 200 ppm. Except for fish sauce, there are no defined maximum limits (MLs) for histamine in other fishery products in Cambodia. Three studies that revealed the concentrations of paralytic shellfish toxins and tetrodotoxin in Mekong pufferfish and horseshoe crabs were included in our review.
A total of 49 samples were gathered from various locations, including lakes, seashores, and wet markets in Phnom Penh, Kandal, Kratie, and Sihanouk Ville. The samples were analyzed by HPLC and LC-MS. The evidence of different concentrations of tetrodotoxin and paralytic shellfish toxins in horseshoe crabs and pufferfish is presented in . The Foodborne Disease Burden Epidemiology Reference Group (FERG), established by the World Health Organization (WHO), provided its initial findings on the global and regional impact of foodborne diseases. These findings included estimates of the occurrence, mortality, and overall burden caused by different foodborne hazards. As per the regional classification by the WHO, Cambodia falls under Western Pacific Region (WPR) B. As shown in , the DALYs/population in Cambodia was calculated based on the DALYs/100,000 people in WPR B. As shown in , our review included 46 studies, with only 13% conducted by national research institutions. Additionally, 28% of the studies were carried out by researchers from the Republic of Korea, the French Republic, and the United States of America, without any collaboration with national research institutes. The remaining 59% of the studies were executed through either bilateral or multilateral partnerships with Australia, Austria, Belgium, Japan, Thailand, Sweden, the WHO, and more than one international partner. Despite the considerable food trade volume and the existing trade agreements, we found no bilateral studies conducted between Cambodia and its primary food trading partners, particularly Vietnam and China. From 2009 to 2018, the leading export partners for Cambodia were Vietnam, Thailand, China, Malaysia, and France. Annually, Cambodia imports around USD 1 billion in vegetables and meats, mainly sourced from Vietnam and Thailand.
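The scaling from regional DALY rates to a country-level burden described above amounts to a single multiplication. The rate and the ~16.7 million population figure below are assumptions for illustration, not values from the review:

```python
def dalys_for_population(rate_per_100k: float, population: int) -> float:
    """Scale a regional DALY rate (per 100,000 people) to a country total."""
    return rate_per_100k * population / 100_000

# Hypothetical numbers: a WPR B hazard rate of 50 DALYs/100,000 applied
# to an assumed Cambodian population of 16.7 million.
print(dalys_for_population(50, 16_700_000))  # → 8350.0
```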
The review found that 50% of the studies concentrated on foodborne parasites, which is particularly relevant to Cambodia, where helminths and cestodes are the predominant causes of the burden of FBD. Conversely, research on foodborne bacteria comprised 24% of the total studies. This highlights a significant gap in this area of biological hazards, as 11 studies over 22 years are insufficient, especially considering the extensive variety of foodborne bacteria found in tropical climates like that of Cambodia. We found only five studies pertaining to viruses, specifically astrovirus and Nipah virus in bats, and hepatitis E virus in pigs and pork. However, these studies may not represent the prevalence in the entire country because of the fragmented nature of the studies included in the review. Our search did not yield any studies of other significant foodborne viruses, including norovirus and rotavirus in fish, or hepatitis virus in fish and meat. Additionally, we included four studies on biogenic amines in both freshwater and marine fish, as well as processed fishery products, and three studies on paralytic shellfish toxins and tetrodotoxin in pufferfish. Our review of the literature reveals a lack of studies specifically addressing biogenic amines, particularly histamine, in histidine-rich fish such as anchovy and members of the family Scombridae. Instead, the existing research has predominantly concentrated on freshwater fish and groupers. Notably, one study did indicate the presence of low levels of histamine in mackerel, another species associated with high histamine production. As outlined in , our review found a noticeable increase in the quantity of food safety research conducted after 2015, suggesting a growing interest in this field of study. This increase in research on biological hazards over the specified timeframe is consistent with findings from another review that examined chemical and biological risks in urban agriculture and food safety systems in Global South countries.
The review noted a substantial growth in the number of publications, reporting 39 articles during the decade from 2001 to 2010, and 123 articles in the subsequent decade from 2011 to 2020. The research conducted on biological hazards in products of animal origin (POAOs) during 2021 and 2022 constituted 20% of the total studies conducted over the span of 22 years, and this figure is anticipated to rise further by the end of 2025, as several initiatives aiming to advance food safety are being implemented in Cambodia. Like the patterns observed in Burkina Faso and Ethiopia, the studies were mainly concentrated in the capital city, Phnom Penh, as demonstrated in . The concentration of studies in this area is likely because it has the highest population density in the country and the largest economic activity. In many LMICs, the capital city is responsible for a disproportionately high percentage of the animal products consumed. In contrast, only one study has been executed in each of five of the 25 provinces. The provinces of Pailin, Oddar Meanchey, and Mondulkiri border Thailand and the Lao People’s Democratic Republic, while the other two, Koh Kong and Kep, are coastal provinces. Thus, there is a significant requirement for additional research specifically targeting hazards in imported food and marine products. The review highlighted various types of bacteria, specifically Brucella spp., Campylobacter spp., Cl. difficile, Salmonella spp., S. enterica, Staph. aureus, and Vibrio spp., which exhibited a broad host range including poultry, pigs, pork, cattle, buffalo, freshwater and marine fish, processed fishery products, edible frogs, and bats. Given the limited data on foodborne biological hazards in POAOs in Cambodia, we also incorporated hazard indicators such as E.
coli as a hygiene marker, along with other biological contaminants found on chopping boards that come into direct contact with food. The detection of Staph. aureus (11.3–38.2%) and E. coli (ND–89.5%) in fishery products, poultry, and pork indicated poor hygiene and sanitation practices along the food production chain in Cambodia. A study pinpointed cross-contamination between the chopping boards used for meat as a potential source of biological hazards in traditional markets. None of the studies that reported the prevalence of E. coli further tested pathogenicity and toxicity. In contrast, an assessment of beef safety conducted in Egypt reported a 20–40% prevalence of enteropathogenic and enterohaemorrhagic strains of E. coli. This study also reported a high prevalence of Staph. aureus (40–44%) and E. coli (68–80%) in imported or locally produced beef. Likewise, a high prevalence of Salmonella spp. (3.5–88.2%) was observed in fish and fishery products, pork, and cutting boards. According to the EU’s microbiological criteria, Salmonella spp. must not be detected in 25 g of certain foods, such as fresh poultry meat. A recent nationwide investigation in pig slaughterhouses in Thailand indicated that the non-compliance rates with the standards of the Department of Livestock Development were 22.34% for E. coli, 8.35% for Staph. aureus, and 30.10% for Salmonella spp. One study reported a low prevalence of Brucella spp. (0.15%) in cattle and swine samples, while the pooled prevalence of brucellosis across Asia stands at 8%. However, the restricted number of studies complicates the ability to draw conclusive insights. We reviewed the first report of the occurrence of the pathogen Cl. difficile in ready-to-eat smoked and dried freshwater fish. The implications of this finding emphasize the urgency for additional epidemiological studies to be conducted in this country.
Another study in our review indicated a high prevalence of Vibrio species (98.2%) in processed fishery products in Siem Reap; however, additional verification is necessary due to the variability in the results obtained from different testing methods. In our review, we found six studies that reported on antibiotic susceptibility, antimicrobial resistance (AMR), and the identification of extended-spectrum beta-lactamase (ESBL)-producing bacteria such as E. coli, Salmonella, and Campylobacter species in samples derived from poultry, fish, and pork products. Furthermore, one study reported the identification of resistance genes in Cl. difficile that are linked to resistance against clindamycin. The prevalence of antimicrobial-resistant bacteria reported in these studies was alarming. The root causes of the high prevalence were the absence of a strong legal framework for the control of antibiotic use, weak enforcement of the responsible use of antibiotics, the absence of good hygiene practice (GHP) implementation, and a lack of time and temperature management, including inappropriate modes of transport for food products (e.g., meats transported in open vehicles). The review included a total of 23 studies that reported the prevalence of parasites in pigs/pork, fishery products, and ruminants in Cambodia. Alongside potential parasites such as Opisthorchis and Trichinella, several other parasites, such as Fasciola, Blastocystis, Ascaris, and Balantidium species, were included in the review because of the involvement of food-sourcing animals in the complex life cycles of these parasites. These parasites can be transmitted to humans not only through POAOs, but also through water, vegetables, and ready-to-eat food. According to the WHO’s estimate, the highest burden of DALYs/population in Cambodia is due to helminth and cestode parasites.
As shown in , the statistical analysis found that the prevalence of parasites in the samples collected from markets was significantly higher than in those obtained from nature, such as rivers and lakes. The fish sold at markets in Cambodia are mostly farmed or imported from neighboring countries. The higher detection rates were very likely due to many factors, such as inadequate import control and the absence of good aquaculture practice (GAqP) implementation at fish farms. Furthermore, inadequate hygiene and sanitation practices may have contributed to an increased risk of cross-contamination, especially from hosts in the trematode life cycle, such as snails and cats. In addition, the transportation and sale of live fish at markets could be contributing to the rise in prevalence; however, more research is needed to establish concrete scientific evidence. This indicates the necessity for the fishery competent authority to enforce control mechanisms, such as recommending specific freezing or heating times and temperatures to kill parasites before consumption. The parasites reported in the studies included Opisthorchis (O.) viverrini, Haplorchis yokogawai, Haplorchis pumilio, and other parasites, as listed in . O. viverrini is commonly known as the small liver fluke, and it can pose a serious public health concern in the Southeast Asian region. The finding of O. viverrini in Cambodia was expected because the Greater Mekong Sub-region (GMS) is known as a highly endemic area for Opisthorchis and opisthorchiasis. Not only humans, but also domestic animals can play a role in contaminating water sources with fecal eggs, which can then lead to infections in snails and fish. As a result, it is recommended that community-based surveys be conducted in the near future to assess the prevalence of O. viverrini in humans, domestic animals, and fish.
Freezing treatment for parasites, as mandated by EU legislation, requires reducing the temperature throughout the entire product to either −20 °C or below for a minimum duration of 24 h, or to −35 °C or below for a minimum duration of 15 h, to effectively control parasites other than trematodes. The European Food Safety Authority (EFSA) cites the WHO’s finding that freezing at −10 °C for a period of 5 days is sufficient to eliminate the metacercariae of Opisthorchis spp. One study emphasized the vulnerability of POAOs to spoilage and contamination during collection or slaughtering, despite their elevated protein levels. These difficulties are exacerbated in the absence of dependable cold chain management systems. Likewise, our review results demonstrated a markedly higher parasite prevalence in the fishery product samples obtained from markets compared to those sourced from natural environments like lakes and rivers, suggesting that the increased detection of parasites was very likely due to complex and compounded factors, such as the lack of good hygiene practices (GHPs) at fish farms, insufficient import control, and the likelihood of cross-contamination during transport, because fish sold at markets are mostly either farmed or imported from neighboring countries. The monitoring and surveillance of animal husbandry practices and antibiotic use in livestock in Cambodia, as well as in neighboring countries, are lacking. Antibiotic usage in food animals is often inappropriate and unregulated, with farmers prioritizing production benefits over the potential negative impacts of antibiotic use. To address this issue, a comprehensive program focusing on the responsible use of antibiotics in food animals should be implemented, starting with feed retailers and the commercial industry. All veterinary products should be labeled in Khmer to ensure proper understanding, in accordance with the requirements of the national regulations.
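The EU and WHO freezing criteria cited earlier in this section can be encoded as simple time–temperature checks. The function names are ours, and the thresholds are taken directly from the figures quoted above:

```python
def meets_eu_freezing(temp_c: float, hours: float) -> bool:
    """EU criterion for parasites other than trematodes: the whole product
    held at −20 °C or below for ≥ 24 h, or at −35 °C or below for ≥ 15 h."""
    return (temp_c <= -20 and hours >= 24) or (temp_c <= -35 and hours >= 15)

def kills_opisthorchis_metacercariae(temp_c: float, hours: float) -> bool:
    """WHO figure cited by EFSA: −10 °C for 5 days eliminates the
    metacercariae of Opisthorchis spp."""
    return temp_c <= -10 and hours >= 5 * 24

print(meets_eu_freezing(-20, 24))                  # True
print(meets_eu_freezing(-18, 48))                  # False: not cold enough
print(kills_opisthorchis_metacercariae(-10, 120))  # True
```

Checks of this kind could underpin the freezing-time recommendations that the fishery competent authority is urged to enforce.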
Training programs for farmers should be conducted by an independent agency to promote knowledgeable and responsible antibiotic use. In our review, we examined five studies that concentrated on three distinct foodborne viruses: astrovirus, hepatitis E virus, and Nipah virus. These studies provided evidence regarding their prevalence in pigs, pork, and bats. However, we found no studies that documented the presence of other significant foodborne viruses, particularly norovirus, rotavirus, and hepatitis A virus, in the various meat and fishery products that are known to be major sources of foodborne viruses. There is a pressing need for more comprehensive research on viruses, as the food safety management practices designed for bacteria and other microorganisms may not be effective against viruses. This necessity has become even more pronounced in the wake of the COVID-19 pandemic, highlighting the importance of understanding viruses that could impact public health. The studies included in this review revealed biogenic amine concentrations in fishery products ranging from 8.1 to 2035 ppm. One of the four studies reported high levels of two biogenic amines (histamine > 500 ppm and tyramine > 600 ppm), which were above the national and EU limits for histamine in fishery products. Three of the four biogenic amine studies investigated the histamine content in freshwater fish. Even though freshwater fish are not typically classified as high histamine producers, it is vital to assess hygiene measures and the management of time and temperature throughout the production chain. Furthermore, due to the popularity of anchovy fish sauce in Cambodia, it is imperative that additional research on histamine levels in this product be conducted to protect consumer health. The production process, distribution, and domestic handling of fermented products in Cambodia should be re-evaluated to minimize the content of biogenic amines and microbiological contamination.
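The limit comparison discussed above (the Cambodian national limit of 100 ppm and the EU limit of 200 ppm for histamine) can be sketched as a small compliance check. The measured values are illustrative points spanning the reported 8.1–2035 ppm range, not actual sample results:

```python
CAMBODIA_LIMIT_PPM = 100  # national histamine limit cited in this review
EU_LIMIT_PPM = 200        # EU histamine limit cited in this review

def histamine_compliance(ppm: float) -> dict:
    """Flag a measured histamine concentration against both limits."""
    return {"cambodia_ok": ppm <= CAMBODIA_LIMIT_PPM, "eu_ok": ppm <= EU_LIMIT_PPM}

# Illustrative measurements spanning the reported 8.1–2035 ppm range.
for ppm in (8.1, 150.0, 2035.0):
    print(ppm, histamine_compliance(ppm))
```

The fermented-product levels reported above (>500 ppm histamine) fail both checks, which is exactly the exceedance the review describes.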
Further research is required to establish preservation techniques that could be applied at both industrial and small scales in Cambodia. Between 2017 and 2019, the Ministry of Health in Cambodia reported a minimum of seven incidents of poisoning resulting from the consumption of freshwater pufferfish. These incidents poisoned over 40 individuals and caused five fatalities. In our review, three studies reported paralytic shellfish toxins and tetrodotoxin in horseshoe crabs and pufferfish. Variations in toxin levels have been documented across various species of Cambodian freshwater pufferfish. Additional research is necessary to explore the specific types of toxins produced by each species of toxin-producing fish. Many studies have examined the pathogens found in various food products, but most of them have focused on specific pathogens, for example, Campylobacter jejuni. These studies are limited in scope and only identify a few pathogens of interest to which Cambodians may be exposed through food. However, they do not necessarily identify the exact agents responsible for foodborne illnesses across the country. For instance, a food item may contain a Salmonella strain that is not zoonotic or cannot cause diarrhea, while the illness could be caused by other potential foodborne pathogens that were not specifically studied. Notably, the review did not identify several important foodborne biological hazards in POAOs in Cambodia, including norovirus, hepatitis A virus, Bacillus cereus, Shigella, Listeria monocytogenes, and toxin-producing E. coli. Despite the rise in research studies on foodborne hazards, there remains a significant disparity in understanding the risk profile of numerous potential foodborne hazards in Cambodia that could impact the safety of POAOs.
Enhancing food safety sensitization initiatives for individuals involved in high-risk value chains, enforcing and implementing comprehensive regulatory frameworks, amplifying communication with consumers and other relevant value chain actors regarding safe food practices, and upgrading the national quality infrastructure, including clean water availability, washing stations, cold storage facilities, and logistics management, all have the potential to enhance the food control system in Cambodia. The Food Safety Law (2022) in Cambodia covers the supervision and guarantee of safety, quality, hygiene, and legality across all phases of the food production process, as stated in Article 1. The Ministry of Commerce is empowered to take charge of and synchronize the governance, in collaboration with other ministries and institutions implicated in the sphere of food quality and safety, while the General Directorate of Customs and Excise (GDCE) is responsible for the control of food product imports and exports. The food safety challenges already identified by the findings of the reviewed studies included risky consumption patterns such as eating raw or partially cooked POAOs, a lack of awareness and implementation of good practices, weak law enforcement, the absence of consistent sampling and testing, limited funding, and dependency on aid agencies. In addition, the recommendations of the previous studies entailed the need to establish effective monitoring and surveillance mechanisms, build a strong and competent food testing network, and strengthen disease prevention and control plans in animal husbandry. The findings of our review highlight urgent requirements for effective strategies to strengthen the food control system in Cambodia. The Cambodian government is encouraged to initiate regional collaboration in food safety research and studies, alongside the legalization of a traceability system within the high-risk food production chain.
Lastly, conducting a nationwide survey on food consumption is essential for the effective creation of a risk-based national food control system. In addition to fostering GHPs, it is vital to establish cold chain facilities and develop appropriate infrastructure for processing, storage, and transportation. In early 2024, the Fisheries Administration enacted two decisions to regulate the usage of veterinary medical products (VMPs) and to implement a National Residue Monitoring Plan (NRMP) for aquaculture products in Cambodia. Nevertheless, the priority for the MAFF is to control the misuse and abuse of VMPs at the primary production level. Additionally, the GDCE, in collaboration with technical ministries such as the MAFF and the Ministry of Health, should strengthen risk-based food import controls, particularly in relation to border trade with neighboring nations. Generally, border trade, often termed informal trade, poses considerable challenges due to the high frequency of small food commodity movements across border checkpoints. The evidence of our review suggests that the existing data and information on biological hazards in POAOs in Cambodia are not sufficient to draw a solid conclusion on the risk profile of each biological hazard in animals and POAOs, because of the limited and disproportionate geographical coverage and the absence of prevalence data for the agents that contribute the highest DALYs in WPR B, such as toxin-producing fungi, toxin-producing E. coli, Shigella, and V. cholerae. Nevertheless, it is obvious that more research studies are required to better understand the risk factors and risk pathways of the biological hazards of POAOs, in order of priority. There is a need for additional systematic reviews that follow PRISMA guidelines, exploring different types of hazards in high-risk food products aside from POAOs.
This should include an examination of fresh fruits and vegetables, ready-to-eat foods, and the safety of drinking water and water used in food business operations. Two limitations of this review were the possibility of selection bias, because study screening and data extraction were carried out by a single reviewer, and challenges in interpreting the results given the heterogeneity of the findings. These limitations were, however, minimized by the contributions of the research project supervisors and external experts. Additionally, the geographical locations and names of provinces referenced in this study, particularly concerning the processed fishery products and other samples collected at markets, pertain solely to the sampling sites and do not necessarily indicate the actual production sites or sources.
Good performance of the criteria of American College of Medical Genetics and Genomics/Association for Molecular Pathology in prediction of pathogenicity of genetic variants causing thoracic aortic aneurysms and dissections | db7cd565-ef01-4edd-bdce-ad5ebd8dcb3b | 8787943 | Pathology[mh] |

There is a growing interest in better recognition of the genetic factors leading to thoracic aortic aneurysms and dissections (TAAD). Diagnosis of TAAD leads to important clinical decisions. Although aortic size remains the main criterion for prophylactic surgical intervention, recent evidence indicates that aortic dissection may occur in a nondilated or mildly dilated aorta, so aortic size loses its predictive ability. Early diagnosis and subsequent prophylactic management prior to dissection significantly improve survival. Previously, we described 51 TAAD patients studied with the use of whole exome sequencing (WES) or panel analysis (TruSight One, Illumina). We reported a significant difference in event-free survival between a 'genotype-positive' group, consisting of patients with variants considered pathogenic/likely pathogenic (P/LP) on account of literature reports, protein disruption, de novo occurrence, segregation analysis, or strong pathogenicity predictions, and 'genotype-negative' patients. In the present work we analyzed an independent cohort of 132 patients tested in the routine clinical setting for variants in 30 genes associated with TAAD or 174 genes included in the TruSight Cardio panel (Illumina). Considering recent progress in the development of free online databases collecting knowledge on human genetic variation, we aimed to evaluate the performance of variant classification based on the recommendations of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG-AMP). The evaluation was based on a comparison of event-free survival in patients classified as genotype-positive vs.
genotype-negative by the ACMG-AMP criteria or by the expert-curated information from the ClinVar database.

Patients and consent

The study cohort was chosen from all index patients referred with a diagnosis of TAAD for clinical genetic testing from 2012 to 2019 to the Unit for Screening Studies in Inherited Cardiovascular Diseases, Institute of Cardiology, and comprised 132 unrelated patients (all Caucasian). Patients with a family history of aortic dissection or aortic aneurysm, unexplained sudden death in first-degree relatives, early onset of aortic disease, or suspected connective tissue disorders were included primarily. In all patients a three-to-four-generation pedigree was drawn and data on the presence of TAAD and other diseases in the family were collected. Every effort was made to review medical data on deceased subjects to confirm a familial form of TAAD. For patients suspected of Marfan syndrome (MFS) and their relatives, the revised Ghent criteria were used and a detailed questionnaire was applied to define the involvement of other systems and organs. The systemic score for each patient was calculated with the web calculator http://www.marfan.org/resources/professionals/marfandx. In addition, web questionnaires were used to assess systemic features of Loeys-Dietz syndrome (http://www.loeysdietz.org/). With regard to the cardiovascular system, all patients had a Doppler echocardiographic study and a CT scan of the entire aorta. In particular, we collected the following data: age at diagnosis of thoracic aortic aneurysm (TAA) and the history of acute aortic dissection (AAD) or prophylactic thoracic aortic surgery. Indications for elective surgery relied on available guidelines and evolved with time. Recognition of aortic root dilatation was based on echocardiography with calculation of the Z-score for the aortic root using the web calculator http://www.marfan.org/resources/professionals/marfandx. Ascending aorta dimensions were normalized to body surface area.
According to guidelines, patients were followed up with serial examinations by two-dimensional echocardiography and/or CT scan of the aorta. Aortic events were defined as either acute aortic dissection (AAD) or first elective aortic surgery. The result of genetic testing did not affect the timing of first surgery in any patient from the study cohort. Data concerning the mitral valve included the presence of mitral valve prolapse (MVP) and mitral regurgitation (MR) on echocardiography. Diagnosis of MVP was based upon published criteria. Left ventricular noncompaction was diagnosed based on a CMR study with a ratio of noncompacted to compacted myocardium greater than 2.3 during diastole on long-axis cine images. Hypertrophic cardiomyopathy was defined as left ventricular hypertrophy, in the absence of loading conditions sufficient to account for the observed degree of hypertrophy, with a maximal left ventricular wall thickness ≥ 15 mm in one or more myocardial segments. Familial disease was defined as the presence of > 1 patient with TAAD in the family. This study was approved by the Bioethics Committee of the Institute of Cardiology (Ref. No. 1407). All participants gave informed written consent, including specific consent to genetic testing and permission to publish the results.

Genetic testing

A custom-designed panel (SeqCap, Roche) consisting of 30 genes related to aortopathies and connective tissue disorders (Table) was used for sequencing in 102 patients. A commercial panel (TruSight Cardio, Illumina) consisting of 174 genes associated with heritable cardiac disorders, including 18 related to TAAD, was used in 30 patients. Sequencing was performed on a MiSeq Dx (Illumina).

Variants assessment

We analysed variants located in the coding or splicing regions of the genes of interest, with a frequency no greater than 0.001 for autosomal dominant and X-linked inheritance patterns, or 0.01 for autosomal recessive inheritance, in both the gnomAD genomes and gnomAD exomes databases.
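As an illustration, the frequency filter just described can be sketched as follows. This is not the pipeline actually used in the study; the dictionary layout and field names ("af_gnomad_genomes", "af_gnomad_exomes") are assumptions for the example.

```python
# Illustrative sketch of the allele-frequency filter described in the text:
# a variant is retained only if its frequency does not exceed the
# inheritance-specific cut-off in BOTH gnomAD genomes and gnomAD exomes.
THRESHOLDS = {"AD": 0.001, "XL": 0.001, "AR": 0.01}

def passes_frequency_filter(variant: dict, inheritance: str) -> bool:
    """Keep a variant only if its allele frequency is at or below the
    cut-off for the given inheritance pattern in both gnomAD datasets."""
    cutoff = THRESHOLDS[inheritance]
    return (variant["af_gnomad_genomes"] <= cutoff
            and variant["af_gnomad_exomes"] <= cutoff)

# Example: a variant too common for a dominant model but acceptable
# under a recessive model.
v = {"af_gnomad_genomes": 0.004, "af_gnomad_exomes": 0.003}
print(passes_frequency_filter(v, "AD"))  # False
print(passes_frequency_filter(v, "AR"))  # True
```

Requiring the threshold in both datasets, rather than either one, is the stricter reading of "in both gnomAD genomes and gnomAD exomes databases" used here.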
Pathogenicity, including VUS status, was assessed using the ClinVar database classification and according to the ACMG-AMP guidelines, using the ACMG Classification tool (version 9.1.2) provided by the VarSome platform and described in detail at https://varsome.com/about/resources/acmg-implementation. Among other criteria, VarSome uses the consensus of the following in silico pathogenicity-predicting programmes with default settings: BayesDel_addAF, DANN, DEOGEN2, EIGEN, FATHMM-MKL, LIST-S2, M-CAP, MVP, MutationAssessor, MutationTaster, SIFT, and PrimateAI, while PhyloP100Way (conservation assessment) is used with cut-offs -3.387, 3.858, 6.8, and 7.2. A variant was considered novel when absent from ClinVar and the HGMD database (release 2020.3) and with no other literature reference according to the VarSome database. The P/LP variants and variants of unknown significance (VUS) thus identified were confirmed by Sanger sequencing.

Statistical analysis

Kaplan–Meier survival curves were constructed to compare event-free survival between probands defined as 'genotype-positive' and 'genotype-negative' using different classification tools, in order to compare their performance. The online tool available at https://astatsa.com/LogRankTest/ was used. Aortic events were defined as acute aortic dissection or first planned aortic surgery. For the purpose of constructing the genotype-positive and genotype-negative groups, variants with "conflicting interpretations of pathogenicity" status in ClinVar were classified according to the prevailing interpretation and, in the absence of such, the less categorical interpretation was chosen (e.g., likely benign over benign, or VUS over likely pathogenic). Finally, a 'genotype-negative' reference group was created consisting of patients with no rare variant found or with no variants other than benign or likely benign (B/LB) by the ClinVar classification.
One patient who had a likely benign variant according to ClinVar classified as likely pathogenic by ACMG was removed from the reference group.
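The rule used for resolving conflicting ClinVar interpretations can be expressed as a short function. This is a sketch of the rule as stated (majority interpretation wins; on a tie, the less categorical class, i.e. the one closest to VUS, is chosen), not the authors' code, and the class labels are illustrative.

```python
from collections import Counter

# Sketch of the conflict-resolution rule described in the text:
# take the prevailing (most frequent) interpretation; if none prevails,
# take the least categorical among the tied classes.

# distance of each class from the least categorical verdict (VUS)
CATEGORICALNESS = {"VUS": 0, "likely benign": 1, "likely pathogenic": 1,
                   "benign": 2, "pathogenic": 2}

def resolve_conflict(interpretations):
    counts = Counter(interpretations)
    top = counts.most_common()
    # a single prevailing interpretation wins outright
    if len(top) == 1 or top[0][1] > top[1][1]:
        return top[0][0]
    # otherwise, among the tied classes, pick the least categorical
    tied = [c for c, n in top if n == top[0][1]]
    return min(tied, key=lambda c: CATEGORICALNESS[c])

print(resolve_conflict(["benign", "likely benign"]))   # likely benign
print(resolve_conflict(["VUS", "likely pathogenic"]))  # VUS
print(resolve_conflict(["benign", "benign", "VUS"]))   # benign
```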
Clinical findings

Table shows summarized clinical characteristics of the study group. Mean age at the time of genetic inquest was 43.4 ± 11.3 years and 92 (69.7%) patients were male. Of the 132 patients, 82 (62.2%) had aortic events: in 39 (29.5%) patients AAD was the first symptom of the disease, at a mean age of 43 years, and 43 (32.6%) had planned aortic surgery, at a mean age of 39 years, as the first procedure. Eight patients required another surgical procedure during follow-up after the first operation for AAD, and three of them had a subsequent procedure. Of the 43 patients who had a planned procedure only, 3 required another surgical treatment. AAD following any planned procedure occurred in 3 patients. The remaining 50 (38.6%) patients were diagnosed with TAA at a mean age of 45.5 years; however, they did not meet criteria for surgical correction. Of the 50 patients with TAA, the majority (n = 43) had aortic root dilatation, with a mean Z-score of 3.9, and dilation of both the root and ascending aorta was found in 20/50 patients. In 24 patients the diagnosis of MFS was made, and in 15 patients with TAAD genetic examination was performed to confirm or exclude the diagnosis of MFS. In the whole group, co-existing abnormalities included: bicuspid aortic valve (28 patients), coarctation of the aorta (CoA) with patent ductus arteriosus (PDA) (1 patient), MVP with MR (n = 9), left ventricular noncompaction in one female patient with bradycardia, and hypertrophic cardiomyopathy in two patients. Peripheral artery aneurysms were present in 6 (4.5%), a history of stroke in 8 (6.1%), and coronary artery disease in 6 (11.7%) of patients. More than half of the whole study group, 73 (55.3%), had hypertension. Familial disease was found in more than one third of TAAD patients (n = 45, 34%). Of these, AAD in a first-degree relative was found in 19 patients (14.4%).
In addition, a family history of unexplained sudden death was present in 10 (7.6%) subjects.

Variant classification by ACMG and ClinVar criteria

Overall, 107 variants in 73 (55%) patients were identified. According to both ClinVar and ACMG, no compound heterozygotes or homozygotes for any pathogenic (P), likely pathogenic (LP), or VUS variant were found in any of the genes associated with recessive inheritance of TAAD. According to ClinVar, among genes associated with autosomal dominant or X-linked inheritance patterns, 12 P/LP, 25 VUS, 23 B/LB, and 47 variants with no record were found. According to the ClinVar classification, 12 patients (9.1%) had at least one P/LP variant, 10 patients (7.6%) had only B/LB variant(s), and 51 (38.6%) patients had only variant(s) of unknown status (either VUS or no record). A full list of the identified variants is available in Additional file: Table S1. Eleven of the 12 variants classified as P/LP by ClinVar were also predicted to be P/LP by the ACMG criteria and one was classified as VUS; however, another variant in the same patient was classified as P/LP by ACMG alone, so a total of 12 patients had variants classified as P/LP by both ClinVar and ACMG (in one patient the classifications apply to two different variants). Most variants (60%) classified as VUS by ClinVar were predicted to be B/LB by ACMG; 32% were predicted as VUS and 8% as P/LP. Of the variants classified as B/LB by ClinVar alone, one variant (4.3%) had a conflicting (LP) prediction by ACMG. Among variants with no ClinVar record, as many as 51.1% were predicted to be P/LP; 29.8% were predicted VUS and 19.1% B/LB. Based on the ACMG classification, 36 patients (27.3%) had at least one P/LP variant, 14 (10.6%) had at least one VUS with no P/LP variants, and 23 (17.4%) had only B/LB variant(s). The distribution of ACMG predictions depending on the ClinVar classification is shown in Fig. .
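For illustration, the percentages reported for ClinVar VUS (60% reclassified B/LB, 32% VUS, 8% P/LP by ACMG) correspond to 15, 8 and 2 of the 25 ClinVar VUS. A minimal sketch of the underlying cross-tabulation (not the authors' code):

```python
from collections import Counter

# Cross-tabulate (ClinVar class, ACMG verdict) pairs into row percentages.
# Counts below reproduce the reported figures for the ClinVar VUS row.
pairs = [("VUS", "B/LB")] * 15 + [("VUS", "VUS")] * 8 + [("VUS", "P/LP")] * 2

def crosstab(pairs):
    by_clinvar = Counter(cv for cv, _ in pairs)   # row totals
    cell = Counter(pairs)                          # cell counts
    return {(cv, acmg): round(100 * n / by_clinvar[cv])
            for (cv, acmg), n in cell.items()}

print(crosstab(pairs))
# {('VUS', 'B/LB'): 60, ('VUS', 'VUS'): 32, ('VUS', 'P/LP'): 8}
```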
Variant pathogenicity as assessed by ACMG criteria is a strong predictor of event-free survival

There was a highly significant difference in event-free survival when the genotype-positive group, consisting of all patients with variants classified by ACMG as P/LP (including those listed in ClinVar), was compared to the reference group of patients with no variant found or with variants classified as B/LB by both ClinVar and ACMG (p = 0.00096). At 50 years of age, 50% of patients from the reference group had undergone surgery compared to 83% in the genotype-positive group (Fig. ). We noted that the observed effect was present both among patients who suffered from AAD (p = 0.02) and among those who underwent prophylactic thoracic aortic surgery (p = 0.006), with no apparent difference between the two groups (p = 0.18, Additional file: Figure S1).

The performance of ACMG criteria in predicting event-free survival is independent of ClinVar information

Since ClinVar status is included in the ACMG criteria (PP5, BP6), an important question is to what extent prediction by the ACMG criteria is independent of ClinVar data. To test this, we analysed the data after dividing the ACMG-positive group into patients positive also according to ClinVar data (n = 12) and those positive only by ACMG but not ClinVar (n = 24). We found that variants classified as P/LP by ACMG alone were associated with significantly shorter event-free survival compared to the genotype-negative group (p = 0.0039, Fig. a). At 50 years of age, 80% of patients from the genotype-positive group had undergone surgery compared to 50% in the reference group. Furthermore, there was no significant difference in event-free survival between the 12 patients with P/LP variants according to both ClinVar and ACMG and the 24 patients carrying variants predicted to be P/LP by the ACMG criteria alone (p = 0.90, Fig. b).
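The p-values for these group comparisons come from the log-rank test. A minimal standard-library sketch of the two-sample test (with invented follow-up data, not the study data):

```python
import math

def logrank_p(times_a, events_a, times_b, events_b):
    """Two-sample log-rank test; returns the p-value of the
    chi-square statistic with 1 degree of freedom."""
    data = [(t, e, 0) for t, e in zip(times_a, events_a)] + \
           [(t, e, 1) for t, e in zip(times_b, events_b)]
    event_times = sorted({t for t, e, g in data if e})
    obs_a = exp_a = var = 0.0
    for t in event_times:
        n = sum(1 for tt, e, g in data if tt >= t)            # at risk
        n_a = sum(1 for tt, e, g in data if tt >= t and g == 0)
        d = sum(1 for tt, e, g in data if tt == t and e)      # events at t
        d_a = sum(1 for tt, e, g in data if tt == t and e and g == 0)
        obs_a += d_a
        exp_a += d * n_a / n
        if n > 1:
            var += d * (n_a / n) * (1 - n_a / n) * (n - d) / (n - 1)
    chi2 = (obs_a - exp_a) ** 2 / var
    # survival function of chi-square with 1 df: p = erfc(sqrt(chi2 / 2))
    return math.erfc(math.sqrt(chi2 / 2))

# Identical groups give no observed-minus-expected difference.
p = logrank_p([2, 4, 6], [True] * 3, [2, 4, 6], [True] * 3)
print(p)  # 1.0
```

Times are ages at event or censoring and the boolean flags mark observed events; censored subjects contribute only to the at-risk sets.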
As expected, the 12 variants classified by ClinVar as P/LP were associated with significantly shorter event-free survival compared to the reference group (patients with no rare variant or with variant(s) classified by ClinVar as benign/likely benign (B/LB)), p = 0.023 (Fig. c). At 50 years of age, 87% of patients from the genotype-positive group had undergone surgery. There was also no significant difference in event-free survival between patients with variants classified as B/LB by ACMG alone and the reference group (p = 0.87, Fig. d).

Variants classified as VUS by either ACMG criteria or ClinVar do not affect event-free survival

In a similar way we analysed event-free survival in patients with variants classified as VUS, but we found no significant difference between those with variants classified as VUS by ACMG alone and the reference group (p = 0.35, Fig. a), between those with VUS by the ACMG criteria alone and those with VUS by ClinVar (p = 0.18, Fig. b), or between those with VUS by ClinVar and the reference group (p = 0.65, Fig. c).

Distribution of genes and variant types among P/LP, VUS and B/LB variants according to the ACMG classification

Of the 38 variants classified as P/LP by ACMG, 28 (73.7%) were in the FBN1 gene, 2 variants (5.3%) in each of SMAD3, TGFB2 and TGFBR1, and 1 variant (2.6%) in each of COL3A1, HCN4, MYH11 and MYLK. 42% (16/38) of the variants were predicted to cause loss of function (LOF; nonsense, frameshift, or splice-site variants). Of the variants classified as VUS, 6 (26.1%) were in FBN2, 3 (13%) in MYH11, 2 (8.7%) in each of ACTA2 and MYLK, and 1 (4.3%) in each of COL1A1, COL5A1, FLNA, HCN4, NOTCH1, NUP43, TGFB2, TGFBR2, ELN and COL3A1. Twenty-two percent (5/23) of these variants were predicted to cause LOF. Most abundant among the B/LB variants were variants in the COL5A1 gene (9; 19.6%); 8 (13.4%) were in NOTCH1, 6 (13%) in ELN, 4 (8.7%) in each of HCN4 and MYH11, 3 (6.5%) in FLNA, 2 (4.3%) in TGFB3, and 1 (2.2%) in each of ACTA2, COL1A1, FBN1 and MFAP5. Fifteen percent (7/46) of these variants were predicted to cause LOF.
We noted that among the P/LP variants according to ACMG, the LOF variants were associated with shorter event-free survival (p = 0.04, Additional file: Figure S2).

ClinVar, being an expert knowledge-based database, enables only the classification of variants with existing evidence regarding observed health status. Novel variants and those lacking sufficient evidence add to the growing burden of VUS.
In silico algorithms add valuable supporting evidence regarding the pathogenicity of a given variant. The development of the ACMG-AMP criteria, which combine both lines of evidence and many levels of supporting information into an integrated prediction, was a cornerstone in unifying the interpretation of the clinical significance of genetic variants. Since their initial development the criteria have been evaluated by multiple laboratories, several areas for improvement have been identified, and there is agreement on the need for further validation. In the present study we tested, and found good performance of, an online tool available at varsome.com which allows classification based on the ACMG-AMP criteria. Although the tool does not cover all the criteria and is subject to constant change in the algorithms implementing the ACMG-AMP criteria, it is free and convenient to use (https://varsome.com/about/resources/acmg-implementation/). The constantly improving functionality of the ACMG-AMP tool results in a growing number of more definitive predictions (B, LB, LP and P) and fewer VUS classifications. Indeed, after application of the ACMG-AMP criteria, of the 72 variants having no ClinVar record or classified as VUS, only 22 variants kept their VUS status (26 were classified as P/LP and 24 as B/LB). The goal of our study was to test the accuracy of these predictions using clinical data and the ClinVar database as a reference. Similar to our previous findings, patients carrying known pathogenic variants (P/LP by ClinVar) had AAD or were referred for elective aortic surgery significantly earlier than patients in whom no rare variants in the genes of interest, or only variants classified as B/LB by ClinVar, were identified. Importantly, the genotype-positive group according to the ACMG verdict alone showed a similar (significant) difference in event-free survival vs. the reference group as the group defined by ClinVar alone.
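The combining step of the ACMG-AMP framework can be sketched in code. This is a simplified illustration of a subset of the published 2015 combining rules for the pathogenic/likely pathogenic side only; the full standard also defines benign rules, stand-alone criteria (e.g., BA1) and conflict handling, none of which are modeled here.

```python
# Simplified sketch of the ACMG-AMP (2015) combining rules, pathogenic
# side only. Inputs are counts of met criteria by evidence strength:
# PVS (very strong), PS (strong), PM (moderate), PP (supporting).
def acmg_verdict(pvs=0, ps=0, pm=0, pp=0):
    pathogenic = (
        (pvs >= 1 and (ps >= 1 or pm >= 2
                       or (pm == 1 and pp == 1) or pp >= 2))
        or ps >= 2
        or (ps == 1 and (pm >= 3
                         or (pm == 2 and pp >= 2)
                         or (pm == 1 and pp >= 4)))
    )
    if pathogenic:
        return "Pathogenic"
    likely = (
        (pvs >= 1 and pm == 1)
        or (ps == 1 and pm >= 1)
        or (ps == 1 and pp >= 2)
        or pm >= 3
        or (pm == 2 and pp >= 2)
        or (pm == 1 and pp >= 4)
    )
    return "Likely pathogenic" if likely else "VUS (by these rules alone)"

print(acmg_verdict(pvs=1, ps=1))  # Pathogenic
print(acmg_verdict(ps=1, pm=2))   # Likely pathogenic
print(acmg_verdict(pm=1, pp=2))   # VUS (by these rules alone)
```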
Furthermore, there was no significant difference in event-free survival when we compared the genotype-positive group according to ClinVar with patients carrying variants predicted to be P/LP by ACMG criteria alone. Similarly, there was no significant difference between patients carrying only B/LB variants according to ACMG alone and the reference group. These findings indicate that ACMG predictions for TAAD-susceptibility genes are of similarly high clinical significance to the information provided by ClinVar. We also observed that among variants classified as pathogenic, those predicted to cause LOF were associated with higher severity than the remaining variants. Thus, variant type (LOF vs. non-LOF) could provide additional clinically useful information. It is likely that in the future even better predictions will be possible based on knowledge collected for individual variants in sufficiently powered studies. Interestingly, in our cohort of TAAD patients there was no significant difference in disease progression between patients carrying VUS by either ACMG or ClinVar and the reference group, suggesting that variants lacking any sort of evidence for being P/LP are more likely to be B/LB, at least with regard to causing monogenic disease. However, it should be noted that this does not preclude their role as risk factors in the multifactorial form of the disorder. Notably, our data are consistent with the findings from a study of 1005 patients diagnosed with nonischemic dilated cardiomyopathy, which analysed the prognostic impact of disease-causing variants classified based on the ACMG-AMP criteria. The rate of major adverse cardiovascular events, end-stage heart failure and malignant ventricular arrhythmia over 10-year follow-up was significantly higher in the genotype-positive group compared with the genotype-negative group, but showed no trend towards a higher rate of these conditions among patients carrying VUS.
Our results also indicate that the TAAD-specific panel performs similarly to the larger universal cardiac panel. Out of 22 variants identified using the TAAD-specific panel and predicted to be P/LP by ACMG and/or VarSome, only one variant (in the HCN4 gene) would not have been found if the universal cardiac panel had been used. This observation supports previous reports suggesting that the use of smaller gene panels consisting of the most frequently mutated genes may be a cost-effective first step in the genetic diagnosis of TAAD patients. Furthermore, based on our analysis of TAAD patients, the P/LP variants found by either ClinVar or ACMG resided mainly within the relatively few genes (FBN1, SMAD3, TGFB2, TGFBR1, COL3A1, MYH11, MYLK) generally agreed to be "definitively" associated with hereditary TAAD and the highest risk of early fatal complications. Patients with genetic defects in TGFBR1, TGFBR2, SMAD3, TGFB2 (affecting the TGF-β signalling pathway), COL3A1 (an extracellular matrix component), FBN1 (TGF-β signalling affecting an extracellular matrix component), MYH11, ACTA2, MYLK and PRKG1 (structural components and modifiers of the smooth muscle layer of the aorta) are at risk of dissection or rupture at aortic diameters below the 5.0 cm threshold recommended for surgical intervention, and therefore sequencing is of key importance for decision-making regarding the timing of prophylactic surgery. A gene panel containing the most established genes associated with the highest risk of early fatal complications in patients with hereditary TAAD (ACTA2, COL3A1, FBN1, MYH11, SMAD3, TGFB2, TGFBR1, TGFBR2, MYLK) was sufficient to identify the prevailing majority of variants most likely to be causative of the disease. The ACMG classification tool available at varsome.com is useful in assessing the pathogenicity of novel genetic variants in TAAD patients. Variant pathogenicity as assessed by ACMG criteria is a strong predictor of earlier AAD or the need for surgical intervention.
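Restating the gene lists above as sets makes the panel-coverage claim easy to check (a sketch; symbols as listed in the text, with ACTA2 as the established TAAD actin gene):

```python
# Core panel of established high-risk hereditary TAAD genes listed above
core_panel = {"ACTA2", "COL3A1", "FBN1", "MYH11", "SMAD3",
              "TGFB2", "TGFBR1", "TGFBR2", "MYLK"}

# Genes in which ACMG P/LP variants were reported in this cohort
plp_genes = {"FBN1", "SMAD3", "TGFB2", "TGFBR1", "COL3A1",
             "MYH11", "MYLK", "HCN4"}

outside_core = plp_genes - core_panel
print(outside_core)  # {'HCN4'}: only the single HCN4 variant falls off-panel
```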
Hopefully, progress in combining empirical data with the further development of bioinformatic tools will reduce the number of VUS, allowing even more reliable predictions that can be used for clinical decisions.
Additional file 1:
Table S1. List of variants identified in TAAD patients.
Figure S1. Kaplan–Meier analysis of event-free survival in TAAD probands with variants classified as: A P/LP by ACMG (ACMG P/LP DISS) vs. reference group (REF DISS), event = dissection prior to surgical intervention, Log-Rank Chi-square 5.23, p = 0.022; B P/LP by ACMG (ACMG P/LP PLAN) vs. reference group (REF PLAN), event = planned prophylactic surgery, Log-Rank Chi-square 7.44, p = 0.0064; C P/LP by ACMG (ACMG P/LP DISS), event = dissection prior to surgical intervention, vs. P/LP by ACMG (ACMG P/LP PLAN), event = planned prophylactic surgery, Log-Rank Chi-square 1.83, p = 0.18.
Figure S2. Kaplan–Meier analysis of event-free survival in TAAD probands with LOF variants classified as P/LP by ACMG (ACMG P/LP LOF) vs. missense and small in-frame deletions (ACMG P/LP MIS), Log-Rank Chi-square 4.16, p = 0.041.
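The captions above summarise two-group Kaplan–Meier comparisons. As an illustration, a minimal KM estimator (on hypothetical follow-up times, not the study data) and a check that the quoted p-values follow from the 1-degree-of-freedom log-rank chi-square values, in stdlib Python:

```python
import math
from collections import Counter

def kaplan_meier(observations):
    """observations: (time, event) pairs, event=1 if the endpoint (e.g.
    dissection or prophylactic surgery) occurred, 0 if censored.
    Returns [(time, survival_probability)] at each event time."""
    events = Counter(t for t, e in observations if e == 1)
    leaving = Counter(t for t, _ in observations)
    at_risk, s, curve = len(observations), 1.0, []
    for t in sorted(leaving):               # walk event/censoring times
        if events[t]:
            s *= 1.0 - events[t] / at_risk  # KM product-limit step
            curve.append((t, s))
        at_risk -= leaving[t]               # drop subjects leaving at t
    return curve

def logrank_p(chi2):
    """Two-sided p-value for a chi-square statistic with 1 df
    (the two-group log-rank case)."""
    return math.erfc(math.sqrt(chi2 / 2.0))

curve = kaplan_meier([(1, 1), (2, 0), (3, 1), (4, 1), (5, 0)])  # toy data
print(round(logrank_p(5.23), 3))   # 0.022, as quoted for Figure S1A
print(round(logrank_p(4.16), 3))   # 0.041, as quoted for Figure S2
```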
Inpatient autopsy rate and associated factors in a Chinese megacity: a population-based retrospective cohort study
Autopsy plays a fundamental role in revealing pathological findings in advanced illness and rare conditions, identifying emerging and re-emerging diseases, ensuring local quality control of antemortem diagnoses and providing more accurate vital statistics. Despite the importance of the autopsy, the autopsy rate has declined globally in recent decades. The reported median autopsy rate worldwide was 6.7% for the period 2000–2023. The Chinese government has not provided publicly accessible data on autopsy rates; the only data available in the literature are reports from individual institutions. According to the limited data from large Chinese hospitals, autopsy rates have declined sharply. A review article showed that the autopsy rate in several Chinese teaching hospitals exceeded 50% in the 1950s but dropped dramatically from the 1970s onwards. The mean autopsy rate of these hospitals from the 1950s to the 1980s ranged from 10% to 20%. Between 1998 and 2008, the autopsy rates in five prominent teaching hospitals in China varied from 0.04% to 2.04%. The currently available autopsy rate for China in a global report (1.0%) was obtained from the data of these five teaching hospitals in that review article, which may not be representative. The lack of attention to autopsies has already weakened the healthcare system and society. For example, during the COVID-19 pandemic, no autopsy was performed on COVID decedents until more than 1500 deaths had occurred, which was 1 month after the first reported death. If autopsies of COVID decedents had been conducted earlier, China and other countries would have gained a better understanding of this emerging disease in its initial stage.
The decision to perform an autopsy is highly influenced by culture and policy and deserves further discussion across countries. In China, there is no coroner or medical examiner system similar to those in the UK or the USA. According to Chinese law, in criminal cases an autopsy may be performed with the approval of the person in charge of the national security authority, regardless of whether the family agrees. In non-criminal cases, if there is a disagreement between the family and medical staff over the cause of death, an autopsy should be conducted. Under this circumstance, the autopsy must be approved and signed off by the deceased's close relatives. If there is a refusal or delay in performing the autopsy that affects determination of the cause of death, the party responsible for the refusal or delay bears the legal consequences. As a result, doctors routinely ask the deceased's close relatives for their consent to perform an autopsy and document their response to avoid violating the law. On the other hand, there is currently no requirement for hospitals or doctors to promote autopsies for quality control or medical inquiry purposes. Back in 1989, the National Hospital Management Grading Standards stipulated that the autopsy rate for tertiary hospitals should exceed 15%. However, this requirement was later abolished, as very few hospitals were able to meet it. Like all cultures, Chinese culture has a unique perspective on death, shaped by influences from Taoism, Confucianism and Buddhism. Death is considered one of the biggest taboo topics among Chinese people. Although most people in mainland Chinese cities are cremated after death, great importance is still placed on whether a complete body is left after death. A national survey showed that the public expressed concerns about body disfigurement following an autopsy, and their perceptions of autopsies were often inaccurate and prejudicial.
The factors that contribute to the decision on whether an autopsy will be performed have been studied in some developed Western countries but have rarely been analysed in low- and middle-income countries such as China. Furthermore, some previous studies that explored the factors associated with autopsy relied primarily on bivariate analyses such as χ² tests and correlations, which may not be sufficiently comprehensive. The aims of this study were to: (1) investigate the autopsy rate of hospital deaths in the Shenzhen megacity; (2) identify factors that may impact the decision to perform an autopsy in hospital deaths and thereby influence the autopsy rate in the future. In compliance with the requirements of the Shenzhen Health Development Research and Data Management Centre, all the records were anonymised before data extraction. This article adheres to the Strengthening the Reporting of Observational Studies in Epidemiology guideline.
Setting
Shenzhen is a city and special economic zone bordering Hong Kong to the south. It is a global centre of technology, research, manufacturing and transportation, with a reputation as China's Silicon Valley. With a population of more than 17 million, Shenzhen is the third most populous city by urban population in China, after Shanghai and Beijing; this population exceeds that of many countries in Europe or Oceania.
Data source
The data used in this study were obtained from the home page of the inpatient medical record dataset provided by the Shenzhen Health Development Research and Data Management Centre. Established in 2021, this organisation is affiliated with the Health Commission of Shenzhen Municipality. This dataset contains all the information on the home page of inpatient medical records in Shenzhen, and the earliest data are from 1 January 2016.
Inclusion and exclusion
All the inpatient deaths between 2016 and 2022 were included.
After the overall and annual autopsy rates were calculated, cases with incomplete data, an unclassified hospital or foreign nationality were excluded from the further multivariable analysis.
Measures
The outcome variable for this study was whether an autopsy had been performed on each decedent (1=yes, 0=no). The explanatory variables were categorised into three groups: (1) demographic factors: gender, age, ethnicity, marital status and medical insurance; (2) clinical factors: hospital admission pathway, length of stay, department, main diagnosis at discharge and external causes of injury and poisoning; (3) hospital factors: hospital level and hospital type. Age was treated as a continuous variable, while the others were categorical variables. The main diagnosis at discharge was classified based on the International Classification of Diseases, 10th Revision. External causes of injury and poisoning were separate items recorded on the home page of the inpatient medical record. The level and type of hospitals were identified using information from the official website of the Public Hygiene and Health Commission of Shenzhen Municipality. In total, there are 151 hospitals in Shenzhen, among which 18 are relatively new hospitals that have not been classified into any tier by the Health Commission of Shenzhen Municipality. Tertiary A hospitals focus on delivering specialist care for rare or complicated diseases. There are nine National Standardised Residency Training (SRT) Bases in Shenzhen, all of which are tertiary A hospitals and academic centres. Therefore, the tertiary A hospital category was further divided into 'national SRT base' and 'not national SRT base'.
Statistical analysis
Descriptive statistics were used to determine the autopsy rates in Shenzhen as a whole and in subgroups.
Results were presented using frequencies and percentages for categorical variables and means (SD) for continuous variables. Autopsy rates were calculated for each subgroup. As the autopsy rate met the criteria for a 'rare event' (a binary dependent variable with dozens to thousands of times fewer events than non-events), the coefficients and standard errors of logistic regression should be corrected; therefore, we used rare events logistic regression. Univariable analyses were performed, followed by multivariable rare events logistic regression that included demographic, clinical and hospital factors. Poisson pseudo maximum likelihood (PPML) regression can also be used under this circumstance, so we used PPML to check the robustness of the results. Heteroscedasticity-robust standard errors were clustered at the hospital level for both models. All the statistical analyses were performed using Stata software (V.SE.17.0). ORs and 95% CIs were reported for the strength of association between autopsy and the independent variables in the rare events logistic regression, while the coefficient and SE were reported for the PPML regression. P values less than 0.05 were considered statistically significant.
Patient and public involvement
Patients and/or the public were not involved in the design, conduct, reporting or dissemination plans of this research.
From January 2016 to December 2022, a total of 42 579 inpatient deaths occurred at 104 hospitals in Shenzhen, and information on autopsy status was available for 35 272 (82.8%) decedents. Of those whose autopsy status was known, 319 had undergone an autopsy, resulting in an autopsy rate of 0.9% (319/35 272). The annual number of autopsies was 39 in 2016, 30 in 2017, 40 in 2018, 55 in 2019, 45 in 2020, 52 in 2021 and 58 in 2022. The autopsy rate for each year is presented in , with a maximum of 1.13% in 2016 and a minimum of 0.76% in 2017.
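The headline rate above (319/35 272 ≈ 0.9%) is a proportion with a very large denominator. As an illustration (the paper reports only the point estimate), a Wilson score interval, a common choice for rare-event proportions, can be attached with stdlib Python:

```python
import math

def wilson_ci(events, n, z=1.96):
    """Point estimate plus Wilson score 95% CI for a proportion."""
    p = events / n
    denom = 1 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return p, centre - half, centre + half

p, lo, hi = wilson_ci(319, 35272)
print(f"{p:.2%} (95% CI {lo:.2%} to {hi:.2%})")  # 0.90% (95% CI 0.81% to 1.01%)
```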
The missing data on autopsy status occurred primarily in 2016 (32.4%) and 2017 (26.9%) and decreased steadily to 2022 (3.5%), as also shown in . The mean (SD) age was 46.7 (24.4) years in the autopsied group and 63.7 (20.9) years in the unautopsied group. Details on autopsy rates in different subgroups are summarised in . All the variables, except for gender, were statistically significant in univariable analyses. presents the multivariable regression results for potential factors related to autopsy decisions. In the rare events logistic regression model, the autopsy decision was significantly and positively associated with being married (OR=1.60, 95% CI: 1.16 to 2.21), self-paying (OR=1.56, 95% CI: 1.07 to 2.26), death due to external causes of injury and poisoning (OR=1.69, 95% CI: 1.02 to 2.81) and pregnancy (OR=13.58, 95% CI: 4.94 to 37.36), but negatively associated with age (OR=0.97, 95% CI: 0.96 to 0.98), emergency admission (OR=0.66, 95% CI: 0.49 to 0.88), referral (OR=0.47, 95% CI: 0.25 to 0.88), neoplasms (OR=0.35, 95% CI: 0.22 to 0.56), respiratory diseases (OR=0.49, 95% CI: 0.26 to 0.95) and for-profit hospitals (OR=0.45, 95% CI: 0.23 to 0.91). There were no statistically significant differences in autopsy rates between large teaching hospitals and other hospitals. The PPML regression yielded similar results, indicating that our findings were robust. In this population-based retrospective observational study, we used data from the Shenzhen Health Development Research and Data Management Centre for the years 2016–2022 to investigate the autopsy rate and associated factors in China's third-largest megacity. To our knowledge, this study presents for the first time the prevalence and characteristics of regional inpatient autopsy in China. Three major findings were revealed in this retrospective study. First, the autopsy rate of hospital deaths was extremely low (0.9%).
Second, some demographic and clinical characteristics, including age, marital status, type of medical insurance, hospital admission pathway, death due to external causes of injury and poisoning, and certain diagnoses, were associated with the decision to perform an autopsy. Third, large teaching hospitals did not request more autopsies than primary or secondary hospitals after controlling for patient characteristics. The overall inpatient autopsy rate in Shenzhen is lower than that of many developed countries. In the USA, for example, the autopsy rate of hospital deaths was 3.1% in 2020. Higher rates were reported in the UK (10.1% in 2019) and in the Nordic countries (6.9%). Despite the COVID-19 pandemic, the inpatient autopsy rate in Shenzhen did not change much over the 7 years observed. Each year, fewer than 60 autopsies were performed on hospital deaths in this megacity. With such a low number of inpatient autopsies, it is difficult to learn from the dead for clinical, educational, research and public health purposes. The vanishing inpatient autopsy contributes to a reduction in pathology expertise in clinical autopsy and in on-site autopsy facilities, which further hinders autopsy requests and perpetuates a vicious cycle. Many hospitals in Shenzhen do not have on-site autopsy capabilities, and medicolegal autopsies are sent to regional autopsy centres. Some demographic characteristics are significantly related to the autopsy decision. In accordance with other research, autopsy rates decline with the age of decedents. The death of a child or young adult is often considered unnatural, and the investigation of death in babies may benefit the counselling of parents on future pregnancies. There is also a higher rate of autopsy among married decedents. In mainland China, marriage has been found to be associated with lower mortality rates and better health. In non-criminal cases, the consent of the deceased's close relatives is the prerequisite for an autopsy.
The spouse is typically the most important person in making the autopsy decision. The primary reason family members consent to an autopsy is to find out the cause of death. The spouse may have more motivation to ask for an autopsy, whether for the spiritual comfort of understanding the exact cause of death or for evidence that could be crucial in potential medical litigation. The cost of an autopsy is regarded as a barrier; however, our study found that self-paying patients were more likely to undergo an autopsy. One possible explanation is that autopsies of self-paying decedents are more often for litigation purposes, as the autopsy may result in considerable compensation that alleviates the financial burden on their families. Clinical characteristics also influence the likelihood of undergoing an autopsy. Deaths due to external causes of injury and poisoning, which accounted for 25% (78/318) of the total number of autopsies, were more likely to have an autopsy. In cases of death resulting from external causes such as violent injuries, car accidents or suspicious poisoning, the incident is likely to be treated as a potential criminal case. The national security authority will determine whether an autopsy is necessary based on the circumstances, and the family cannot prevent the autopsy if it is deemed essential for the investigation. In contrast, neoplasm diagnoses were associated with a lower likelihood of autopsy, since the cause of death is supposedly known. In China, maternal mortality is closely monitored and investigated more thoroughly than other deaths: hospitals are required to report a maternal death to the designated healthcare institutions within 2 hours, and subsequent detailed information on the case is also required for the national online reporting information system, which prompts an autopsy request.
Although in univariable analyses shorter hospital stays before death, emergency admission and referral were associated with higher autopsy rates, these factors were not significantly associated, or were even negatively associated, with autopsy when controlling for other covariates, suggesting that there is no additional interest in exploring the cause of death in unexpected, severe or urgent cases. In addition to patient-related characteristics, institutional factors may also influence an individual's likelihood of undergoing an autopsy. There is large variation in autopsy rates among hospitals in the USA, where larger hospitals and teaching hospitals are associated with higher autopsy rates. However, after controlling for patient-related characteristics, there was no longer a statistically significant difference between the best Tertiary A hospitals and other hospitals in Shenzhen. Patients in secondary and primary hospitals often have common and mild diseases, so an unexpected death in these hospitals may be more likely to be related to malpractice, which could lead to an autopsy request. In fact, several of the best Tertiary A hospitals in Shenzhen have the necessary facilities and qualified pathologists to perform autopsies. It is a pity that the best Tertiary A hospitals in Shenzhen do not show greater enthusiasm than secondary and primary hospitals for performing autopsies to improve the quality of diagnosis and treatment. For-profit hospitals were associated with a lower autopsy rate, as in China they primarily provide medical services not commonly offered by public hospitals, such as rehabilitation and hospice care. It is imperative and urgent for China to emphasise the importance of autopsy and increase the autopsy rate. In some countries, clinical autopsy is not routinely performed because of perceived religious and cultural resistance.
However, the autopsy rate exceeded 50% in some Chinese teaching hospitals in the 1950s, and this high rate was mainly attributed to the efforts hospitals and the government made to educate the public and promote autopsies. Our study also indicates that family members will accept an autopsy if it is deemed beneficial. There are insufficient initiatives among clinicians to facilitate autopsies. Litigation concerns about autopsy are prominent among healthcare workers. Physician–patient relations have been quite tense in China in recent years; physicians' distrust of patients and their families leads to fear and self-protection, which hinders transparent communication and medical error disclosure. The extremely low autopsy rate in China thus stems more from the healthcare providers' side. More efforts are needed to encourage hospitals and healthcare providers to proactively request autopsies, helping to revive this important procedure. Multifaceted actions must be launched urgently to increase the autopsy rate in China. At the country level, the numbers of clinical and legal autopsies should be included in China's national health statistics as routine indicators to reflect the true state of autopsies. Acknowledging the reality of the extremely low clinical autopsy rate is the first step toward improvement. At the regional level, in addition to making better use of existing regional autopsy centres to facilitate clinical autopsies in non-academic hospitals, local health authorities should monitor the number of each type of autopsy, provide financial support for clinical autopsies and offer feedback to hospitals. At the hospital level, an institutional Office of Decedent Affairs may function as a direct communication link between pathology and decedents' families, providing autopsy-related discussions and bereavement counselling.
The value of clinical autopsies for medical advancement and quality control should be emphasised to all health professionals, from undergraduate education through continuing education. Residents should be trained in structured conversations with relatives regarding autopsies, and cooperation between clinicians and pathologists should be intensified. Last but not least, a patient safety culture should be fostered to alleviate concerns about potential punishment for errors found by autopsies, as a blame culture is still pervasive in the Chinese health system.
Limitations
There are some limitations to this study. First, autopsy status was missing in about 17.2% of cases, which may introduce some bias into our analysis. The missing data on autopsy status occurred primarily in the first 2 years and then decreased steadily throughout the remaining years (3.5% in the last year). The overall rate of missing autopsy status is comparable to a previous study in the USA, which reported a 13.4% missing rate. Second, we were unable to collect information on the motivation for each autopsy. The autopsies studied in our research included those performed at the request of clinicians for clinical concerns or at the request of family members for medicolegal purposes, as well as forensic autopsies aimed at investigating homicidal or other violent deaths. Considering the number of cases with external causes of injury and poisoning, the actual rate of clinical autopsy is even lower. Third, variables such as the educational background and occupation of the deceased were not analysed; more than 50% of these two potential covariates were missing, so they were not included in the regression models. Lastly, we only analysed inpatient autopsy in Shenzhen. As a leading technology hub, Shenzhen has been at the forefront of healthcare data collection in China.
The variation in autopsy rates among different geographical regions within one country could be very large, and cities with more population may have higher autopsy rates. Our descriptive and univariable results also showed that the autopsy rate of some lower-tier hospitals is lower than academic centres. There are 18 000 county hospitals and 34 000 township health centres in rural China, none of which exist in Shenzhen city. However, our results are consistent with the autopsy rate in leading teaching hospitals nationwide, implicating a good representation of major Chinese cities. The management of healthcare data in other parts of China needs to be improved, and national data regarding autopsy rates and types should be collected and summarised the sooner the better. Conclusion The autopsy rate of hospital deaths in this Chinese megacity was extremely low. Some demographic and clinical characteristics, including age, marital status, type of medical insurance, hospital admission pathway, death due to external causes of injury and poisoning and certain diagnoses, were associated with the decision to autopsy. The large teaching hospitals do not request more autopsies compared with other hospitals, after controlling for patient characteristics. The national data regarding autopsy rates and types should be collected and summarised. More efforts are urged to encourage hospitals and healthcare providers to proactively request autopsies, helping to revive this important procedure. There are some limitations of this study. First, the autopsy status was missing in about 17.2% of cases, which may cause some biases in our analysis. The missing data of autopsy status occurred primarily in the first 2 years, and then decreased steadily throughout the remaining years (3.5% in the last year). The overall rate of missing autopsy status is comparable to a previous study in the USA, in which a 13.4% missing rate was reported. 
Second, we were unable to collect the information on the motivation for each autopsy. The autopsy studied in our research included those performed at the request of clinicians for clinical concerns or at the request of family members for medicolegal purposes, as well as forensic autopsies aimed at investigating homicidal or other violent deaths. Considering the number of cases with external causes of injury and poisoning, the actual rate of clinical autopsy is even lower. Third, the variables such as educational background and occupation of the deceased individuals were not analysed. More than 50% of these two potential covariates were missing. Therefore, these two variables were not included in the regression models. Lastly, we only analysed inpatient autopsy in Shenzhen. As a leading technology hub, Shenzhen has been at the forefront of healthcare data collection in China. The variation in autopsy rates among different geographical regions within one country could be very large, and cities with more population may have higher autopsy rates. Our descriptive and univariable results also showed that the autopsy rate of some lower-tier hospitals is lower than academic centres. There are 18 000 county hospitals and 34 000 township health centres in rural China, none of which exist in Shenzhen city. However, our results are consistent with the autopsy rate in leading teaching hospitals nationwide, implicating a good representation of major Chinese cities. The management of healthcare data in other parts of China needs to be improved, and national data regarding autopsy rates and types should be collected and summarised the sooner the better. The autopsy rate of hospital deaths in this Chinese megacity was extremely low. 
Prevalence and molecular characterization of ticks and tick-borne pathogens of one-humped camels (Camelus dromedarius)

Ticks are responsible for substantial economic losses to farmers in livestock-keeping tropical regions of the world. Tick infestations cause wounds and inflammation from tick bites, blood loss and disease through the transmission of pathogens. The tick fauna infesting livestock in Africa is diverse, with species belonging to the genera Hyalomma, Rhipicephalus and Amblyomma having the highest impact on the productivity and health of these animals. Tick-borne pathogens (TBPs) include viruses, bacteria, protozoans and helminths afflicting human and animal health worldwide. Complex and dynamic interactions occur inside ticks with multiple microbes ranging from pathogens to endosymbionts. The former are responsible for disease, while the latter play a crucial role in maintaining the fitness of the vector. Tick-borne rickettsioses are caused by intracellular bacteria of the genus Rickettsia. Clinical manifestations include high fever, rash, myalgia, headache and lymphadenitis. Rickettsia africae and R. aeschlimannii belong to the zoonotic spotted fever group (SFG) rickettsiae and have been reported from feeding hard ticks collected from livestock in Nigeria. Members of the genera Anaplasma and Ehrlichia (family Anaplasmataceae) can infect both animals and humans. Limited studies have been conducted regarding the infection of camels with Anaplasmataceae. For example, Anaplasma marginale has been detected in camels using serological tests. However, other studies found no evidence of DNA of this bacterium. On the other hand, DNA of a novel species of Anaplasma, “Candidatus Anaplasma camelii”, has been confirmed by sequencing in the blood of camels in various countries.
Infected animals may present clinical signs such as anorexia, respiratory distress, edema of the sternum and xiphoid, or even sudden death. Coxiella burnetii, the causative agent of Q fever, is a zoonotic pathogen of vertebrates that is distributed worldwide. Clinical manifestations are self-limiting febrile conditions in the majority of cases and reproductive disorders in some animals. Interestingly, strains of Coxiella burnetii originated from the diverse group of Coxiella-like endosymbionts, which are descendants of a Coxiella-like progenitor hosted by ticks. Apicomplexan protozoans of the genus Babesia are transmitted by hard ticks. Dromedaries are no exception to infection with Babesia, although very few published reports exist so far. Pathogenicity differs according to the Babesia species: Babesia caballi causes severe clinical disease in equines characterized by fever, anemia, hemoglobinuria and, in some cases, edema, while B. occultans is of lower pathogenicity, as previously reported in cattle with no visible clinical signs. In camels, reported clinical signs of babesiosis include anemia, fever, icterus, hemoglobinuria and gastro-intestinal stasis. Current estimates put the one-humped camel (Camelus dromedarius) population in Nigeria at about 283,395 head. Pastoralists primarily keep these animals for transportation and as a source of meat, and the carcass yield from camels is comparatively high even under low-input management systems. Recent estimates show that the consumption of camel meat in Nigeria has increased substantially due to its nutritional value and for health reasons. Camel meat has relatively less fat than beef and mutton and is acclaimed to alleviate conditions such as hypertension, hyperacidity and cardiovascular disease. On the other hand, researchers in Nigeria have considered the dromedary camel a ‘foreign animal’, which has led to research apathy towards this species in the recent past.
As desertification continues to encroach into sub-Saharan Africa, interest in camels is gradually building up among pastoralists in northern Nigeria, as the camel is resilient to arid conditions and is arguably the best option for mitigating the effects of harsh environmental conditions on livestock production. In Nigeria, dromedary camels are raised in semi-arid conditions, grazing on poor pastures for most of the year, where they are exposed to a wide variety of vectors, including ticks. This underscores the need to characterize these potential sources of disease. In order to be better prepared to raise this animal species successfully without the debilitating effects of ticks and tick-borne diseases on its health and productivity, this study was carried out to assess (i) the species diversity of ticks on camels, (ii) the occurrence of selected tick-borne pathogens in ticks and blood collected from camels and (iii) the risk factors associated with infection of camels with tick-borne pathogens in Nigeria.
Study area
The North-West region is a semi-arid zone and the largest region in Nigeria, with a combined human population of 35,786,944. This region has savannah-type vegetation favorable to camel husbandry, as camels are easily predisposed to foot rot in wetland areas; hence the concentration of camels in this region. The temperature ranges from 18 °C to 45 °C with a mean of 27 °C. There is a single rainy season from May to October, with a mean annual rainfall of 508–1016 mm. Three states (Sokoto, Jigawa and Kano) were selected for sampling (Fig. ).
Study design and sampling locations
A cross-sectional study was carried out from September to November 2017.
Blood and tick samples were collected from several sampling points across the three study areas, comprising abattoirs, livestock markets and herders/pastoralists. All samples from Kano (n = 92) were collected at the Kano metropolitan abattoir (12.0123540N, 8.520795E) located in the city of Kano. For Sokoto state, all samples (n = 55) were collected from herders/pastoralists at several locations within the state; the geographical coordinates for the state of Sokoto are 12.1358N, 4.8654E. Finally, the livestock market located in Maigatari local government of Jigawa state (12.8125483N, 9.444303E), as well as adjoining villages within this area, was used for sampling in Jigawa state (n = 29). Sampled animals from all study areas were raised under the traditional nomadic (extensive) management system typical of camel husbandry in Africa, with little access to veterinary care. Information such as age (< 5 years/> 5 years), sex (male/female) and presence/absence of ticks was collected for each animal to assess possible risk factors associated with tick-borne pathogen infection. Body condition scores were classified into one of three classes (poor, moderate or good) based on visual inspection of fat storage at the back and flank regions. All samples were collected from apparently healthy animals without clinical signs of infection, after seeking the owner's consent and approval.
Blood and tick sample collection
About 5 ml of whole blood was collected from the jugular vein, in some cases from the lateral abdominal vein of clinically healthy animals and, in the case of slaughtered animals, from severed jugular blood vessels. All collected blood samples were transferred into labelled EDTA-coated tubes and transported to the laboratory on ice packs within 4 h. In the laboratory, 125 µl of blood was dispensed onto the marked spot of a Classic FTA card (Whatman® GE Healthcare, Buckinghamshire, UK).
All cards were labelled, air dried and stored at room temperature for further analysis. The skin covering known predilection sites for ticks, including the perineum, abdomen, thigh, ear, neck and dewlap, was carefully examined for the presence of ticks. Ticks were collected using tweezers into labelled tubes plugged with cotton. Ticks from each animal were kept in separate tubes, and the labelled tubes carried information on the identity of the animal, including its location.
Morphological identification of ticks
All ticks collected from infested animals were identified to species level based on standard keys, using a stereomicroscope (Olympus®, Tokyo, Japan), separately by two of the co-authors. Specimens were separated by species, life stage and sex. After identification, all tick specimens were preserved in 70% ethanol and kept at 4 °C.
Washing and homogenization of ticks
Individual ticks were washed twice with double-distilled water after the removal of ethanol in individual Eppendorf tubes, as described by Silaghi et al. A 5 mm sterile stainless-steel bead and 100 µl of sterile PBS were added to each tube. Ticks were homogenized using a TissueLyser II (Qiagen, Hilden, Germany) twice for 60 s, with a 30 s break in between, at an oscillation frequency of 30 Hz. After centrifugation at 2500 rpm for 3 min, the supernatant was removed.
Pooling of supernatants and extraction of genomic DNA from tick homogenates and FTA cards
Prior to extraction, the supernatants from ticks of the same species and the same animal were pooled, with a maximum of 5 ticks per pool. A maximum of 80 µl of homogenate (supernatant) was used for DNA extraction (each tick contributing a maximum of 16 µl). For partially fed ticks, supernatants were either pooled or used individually (engorged ticks were always processed individually). DNA was extracted from FTA cards (blood) from an approximately 6 mm punch of the dried blood spot on the card.
The spot was carefully excised into a sterile, labelled 2 ml Eppendorf tube containing one 4 mm sterile stainless-steel bead. The samples were then lysed twice for 60 s using the TissueLyser II (Qiagen). Genomic DNA was isolated with the QIAamp DNA Mini Kit (Qiagen) according to the manufacturer's instructions and stored at −20 °C until use.
Tick species identification using PCR
For the molecular identification of tick species, three different genes (12S rRNA, 16S rRNA and cox1) were targeted using the primer pairs shown in Table . Genetic identification of ticks was carried out using DNA extracted from a single representative tick of each species. The reaction was performed in a total volume of 25 μl using the GoTaq® G2 Flexi DNA Polymerase Kit (Promega, Madison, WI, USA). The PCR mix consisted of 5 μl GoTaq® 5× Flexi Buffer (green), 3 μl 25 mM MgCl2 solution, 0.5 μl 10 mM dNTPs, 1 μl of each primer (forward and reverse, 10 µM), 0.1 μl GoTaq® DNA Polymerase (5 u/μl), 9.4 µl nuclease-free water (NFW) and 5 µl template DNA. A C1000 thermal cycler (Bio-Rad, Munich, Germany) was used for amplification, and the cycling conditions are provided in Table .
Molecular detection of pathogens using PCR
For pathogen detection, PCRs were used to amplify DNA of Rickettsia spp., Anaplasma/Ehrlichia spp., A. marginale, C. burnetii and Babesia/Theileria spp. from tick DNA, while A. marginale, “Ca. A. camelii” and Babesia/Theileria spp. were screened for in blood DNA. All reactions were performed in a total volume of 25 μl using the GoTaq® G2 Flexi DNA Polymerase Kit (Promega). The PCR mix contained 5 μl GoTaq® 5× Flexi Buffer (green), 3 μl 25 mM MgCl2 solution, 0.5 μl 10 mM dNTPs, 400 nM of each primer (forward and reverse), 0.1 μl GoTaq® DNA Polymerase (5 u/µl), 9.4 µl NFW and 5 µl template DNA. Every reaction set included a positive and a negative control (molecular-grade water). Table summarizes the PCR cycling conditions.
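As a quick consistency check, the component volumes listed for the 25 μl tick-identification reaction can be tallied and scaled into a template-free master mix for a batch of samples. The sketch below is a hypothetical bench-side helper, not part of the study's workflow; the 10% pipetting excess is an assumption, not a value from the study.

```python
# Hypothetical helper: tally the 25 µl GoTaq reaction described above and
# scale a template-free master mix. The 10% excess is an assumption.

COMPONENTS_UL = {                   # volumes per single reaction, in µl
    "5x Flexi buffer (green)": 5.0,
    "25 mM MgCl2": 3.0,
    "10 mM dNTPs": 0.5,
    "forward primer (10 uM)": 1.0,
    "reverse primer (10 uM)": 1.0,
    "GoTaq polymerase (5 u/ul)": 0.1,
    "nuclease-free water": 9.4,
}
TEMPLATE_UL = 5.0                   # template DNA, added per reaction

def master_mix(n_reactions: int, excess: float = 0.10) -> dict:
    """Scale the template-free mix for n reactions plus a pipetting excess."""
    factor = n_reactions * (1 + excess)
    return {name: round(vol * factor, 2) for name, vol in COMPONENTS_UL.items()}

per_reaction_mix = sum(COMPONENTS_UL.values())       # 20.0 µl of mix
total_per_reaction = per_reaction_mix + TEMPLATE_UL  # 25.0 µl, matching the text
```

The mix components sum to 20 µl, so adding the 5 µl of template gives exactly the 25 µl reaction volume stated above.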
Gel electrophoresis and sequencing
Agarose gel electrophoresis at a concentration of 1.5% was used to separate PCR products, with 2 μl GelRed™ (1×; equivalent to 1 µl/10 ml) (Biotium, Fremont, CA, USA). Bands were visualized using a ChemiDoc™ MP imaging system (Bio-Rad). Amplicons were purified with NucleoSEQ® columns (Macherey-Nagel, Düren, Germany) according to the manufacturer's instructions and then sequenced in one direction using an ABI PRISM® 3130 sequencer (Applied Biosystems, California, USA) at the Institute of Diagnostic Virology, Friedrich Loeffler Institute, Germany. The nucleotide sequences were viewed and edited using Geneious 9.1 software (Biomatters, Auckland, New Zealand) and analyzed against sequences deposited in GenBank using BLASTn (National Center for Biotechnology Information; www.blast.ncbi.nlm.nih.gov/Blast ) for high-similarity sequences.
Real-time PCR for the amplification of A. marginale
The msp1β gene of A. marginale was targeted in DNA samples from ticks and camel blood using species-specific primers and a probe (Table ), as previously described. The PCR was carried out using a CFX96 Real-Time System (Bio-Rad) with the cycling conditions described in Table . Amplification was performed using iTaq™ Universal Probes Supermix (Bio-Rad) in a total volume of 25 µl, comprising 200 nM of forward and reverse primers, 100 nM of probe (Table ), 12.5 µl (2×) iTaq™ Supermix, 0.9 µl RNase-free water and 10 µl template DNA. Each run included a positive and a negative control.
Statistical analysis
For pooled tick samples, the prevalence was estimated using the minimum infection rate (MIR). The MIR assumes that only one tick is infected in a positive pool. Results are presented with 95% confidence intervals (CI: lower and upper) for the infection rates and MIR of the detected pathogens. MIR was expressed as a simple percentage (only one tick was considered positive in a pool of adult ticks).
The calculation was thus: MIR = (P/N) × 100%, where P is the number of positive pools and N is the total number of ticks tested. The chi-square test was used to test for statistical significance between the various risk factors. The odds ratio was used to test the association between the presence of ticks and infection with Anaplasma spp. The level of significance was set at P < 0.05. Statistical analyses were carried out using GraphPad Prism version 5.0 (GraphPad Software, La Jolla, California, USA; www.graphpad.com ).
Phylogenetic analysis
The nucleotide sequences were viewed and edited using Geneious 11.1.5 and analyzed against references in GenBank using BLASTn ( www.blast.ncbi.nlm.nih.gov/Blast ) for high-similarity sequences to confirm the identity of the ticks as well as of the pathogens. Sequences were added to the alignment explorer in MEGA 7 and aligned with ClustalW. Reference sequences were also added to the aligned datasets. A model test was run in MEGA 7 prior to tree construction in order to select a suitable model. The phylogenetic tree was constructed using the Maximum Likelihood method based on the Kimura 2-parameter model with 1000 replicates. A median joining network was constructed using PopART ( http://popart.otago.ac.nz ) to examine the haplotype distribution and relationships.
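The MIR formula above reduces to a one-line calculation. A minimal sketch is shown below; the normal-approximation 95% CI is a common textbook choice and is an assumption here, since the exact CI method used in the study is not specified.

```python
import math

def mir(positive_pools: int, total_ticks: int) -> float:
    """Minimum infection rate (%): assumes exactly one infected tick per positive pool."""
    return positive_pools / total_ticks * 100.0

def mir_ci(positive_pools: int, total_ticks: int, z: float = 1.96) -> tuple:
    """Normal-approximation 95% CI for the MIR, in percent.
    (The CI method is an assumption; the study does not state which one was used.)"""
    p = positive_pools / total_ticks
    half = z * math.sqrt(p * (1 - p) / total_ticks)
    return (max(0.0, p - half) * 100.0, min(1.0, p + half) * 100.0)

# e.g. the 51 Rickettsia-positive pools among the 593 ticks reported in the Results
rickettsia_mir = mir(51, 593)   # ≈ 8.6%
```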
Morphological and molecular identification of tick species
Of the 176 camels examined, 92 (52.3%) were infested with ticks, yielding a total collection of 593 ticks. All ticks collected were adults with no immature stages, comprising 440 (74.2%) males and 153 (25.8%) females. The largest number of ticks was collected in Kano (396; 66.8%), followed by Jigawa (145; 24.5%) and Sokoto (52; 8.8%) states (Table ). Altogether, 7 tick species were identified: Hyalomma dromedarii (n = 465; 78.4%), H. truncatum (n = 87; 14.7%), H. rufipes (n = 19; 3.2%), H. impressum (n = 2; 0.3%), H.
impeltatum (n = 18; 3.0%), Amblyomma variegatum (n = 1; 0.2%) and Rhipicephalus evertsi evertsi (n = 1; 0.2%) (Table ). Three tick species (H. dromedarii, H. truncatum and H. rufipes) were found at all three locations, while H. impeltatum was found only in Kano and Sokoto. Amblyomma variegatum was collected only in Kano state, R. e. evertsi only in Jigawa state and H. impressum only in Kano state (Table ). To confirm the identity of these tick species, molecular identification was carried out. A BLASTn query of the sequences obtained for the 16S rRNA gene revealed a high identity match, ranging from 98.9% to 100%, for all tick species except H. rufipes. Due to the ambiguity of the 16S rRNA gene for H. rufipes, the 12S rRNA gene was amplified, but a BLASTn query of the obtained sequences still could not resolve this ambiguity. Lastly, we amplified and sequenced the cox1 gene; the sequence analysis gave 99.8% homology with H. rufipes (GenBank: KX000641.1). The newly generated sequences were deposited in the GenBank database under the accession numbers MN394427-MN39444 (16S rRNA gene), MN394457-MN394461 (12S rRNA gene) and MN601291-MN601294 (cox1).
Risk factors associated with tick infestations of camels
Tick infestation rates were slightly higher in male camels across the three locations, with 65.6% (40/61) in Kano, 63.2% (12/19) in Jigawa and 27.8% (10/36) in Sokoto, compared with female camels at 64.5% (20/31), 60.0% (6/10) and 21.1% (4/19), respectively. No significant difference was observed between sexes across the study locations (P > 0.05) (Table ). The infestation rate was significantly higher in camels > 5 years-old than in those < 5 years-old (P < 0.05) (Table ). The odds of infestation with ticks were higher in camels > 5 years-old from Kano (OR: 1.44, 95% CI: 0.49–4.18) compared with Jigawa (OR: 0.76, 95% CI: 0.15–4.30) and Sokoto states (OR: 0.49, 95% CI: 0.14–1.73) (Table ).
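The odds ratios above come from standard 2×2 contingency tables. As an illustration using the Kano sex counts reported above (40 of 61 males and 20 of 31 females infested), a minimal sketch follows; the Woolf (log-scale) confidence interval is an assumed method, since the study does not state how its CIs were computed.

```python
import math

def odds_ratio(a: int, b: int, c: int, d: int) -> float:
    """OR for a 2x2 table: a/b = group-1 infested/not, c/d = group-2 infested/not."""
    return (a * d) / (b * c)

def woolf_ci(a: int, b: int, c: int, d: int, z: float = 1.96) -> tuple:
    """Woolf (log-scale) 95% CI for the odds ratio (method assumed, not from the study)."""
    log_or = math.log(odds_ratio(a, b, c, d))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return (math.exp(log_or - z * se), math.exp(log_or + z * se))

# Kano counts from the text: 40 of 61 males infested (21 not),
# 20 of 31 females infested (11 not)
or_sex_kano = odds_ratio(40, 21, 20, 11)   # ≈ 1.05, essentially no sex effect
```

An OR near 1 with a CI spanning 1 is consistent with the non-significant sex difference reported above.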
Finally, camels with a good body condition score were significantly less infested with ticks across the three study locations than those with a poor or moderate body condition score (P < 0.05) (Table ).
Molecular detection of tick-borne pathogens in ticks
Rickettsia spp.
Altogether, 67 of 231 tick pools (comprising 593 ticks) produced bands of the correct length in the gltA PCR, and all were sequenced. Of these, 51 yielded good-quality sequences that could be evaluated as Rickettsia spp. The minimum infection rate (MIR) of the tick pools for Rickettsia spp. was therefore 8.6%. Across the study locations, 31 pools were positive (MIR 7.8%) in Kano state, 14 pools (MIR 9.7%) in Jigawa state and 6 pools (MIR 11.5%) in Sokoto state (Table ). No significant difference (P > 0.05) was observed between the study locations. Rickettsia spp. were detected in four tick species, with H. rufipes having the highest MIR (36.8%) and H. impeltatum the lowest (5.6%); the others were H. truncatum (16.1%) and H. dromedarii (6.2%) (Table ). Following a BLASTn query of the NCBI database, 45 of these sequences showed 100% identity with R. aeschlimannii (GenBank: MH267736.1) and 6 showed high similarity scores ranging between 98.7–99.7% (GenBank: MH267736.1). BLASTn analysis of one of the sequences obtained from the gltA gene of Rickettsia spp. gave 100% homology with C. burnetii (GenBank: CP035112.1). To further confirm the genotypes of Rickettsia spp., the ompA and ompB genes were partially amplified. We tested all gltA-positive samples with good-quality sequences (n = 51) for both ompA and ompB. For ompA and ompB, 39 and 43 tick pools, respectively, were positive, from which 13 and 16 amplicons, respectively, were selected for sequencing and yielded good-quality sequences. BLASTn analysis of the sequences obtained for both genes showed 99.9–100% similarity with R. aeschlimannii in GenBank.
All newly generated sequences were deposited in the GenBank database under the accession numbers MN601304-MN601344 (gltA), MT126809-MT126818 (ompA) and MN601295-MN601303 (ompB).
Anaplasma/Ehrlichia spp.
The PCR targeting the 16S rRNA gene of Anaplasma/Ehrlichia spp. was positive in 62 of 231 tick pools, with a MIR of 10.5%. By location, the MIR in Sokoto state was 15.4%, followed by Kano state with 10.1%, while Jigawa state had the lowest MIR of 9.7%. No significant difference (P > 0.05) was observed across the sampling locations (Table ). Five tick species were positive: H. dromedarii (MIR 8.0%), H. truncatum (MIR 16.1%), H. rufipes (MIR 36.8%), H. impeltatum (MIR 11.1%) and H. impressum (MIR 100.0%) (Table ). BLASTn analysis of sequences (240 bp) obtained from positive samples of the Anaplasma/Ehrlichia PCR showed 98–100% similarity to Peptoniphilus spp. (GenBank: LC145547.1), “Candidatus Midichloria mitochondrii” (GenBank: KU559921.11) and a Rickettsiales bacterium (GenBank: DQ379964.1).
Anaplasma marginale
Anaplasma marginale DNA was not detected in DNA from ticks.
Coxiella burnetii
The DNA of C. burnetii was detected in 17 of 231 tick pools, with a MIR of 2.9%. The MIR for the states of Sokoto, Kano and Jigawa was 3.8%, 3.0% and 2.1%, respectively (Table ). No significant difference (P > 0.05) was observed across the study locations. Most C. burnetii-positive tick pools were H. dromedarii (MIR 3.4%), followed by H. truncatum (MIR 1.1%); only 1 of 87 (1.1%) H. truncatum ticks in the pools was positive for C. burnetii (Table ). Sequences had similarity scores ranging between 99.2–100% to C. burnetii (GenBank: CP035112.1). The newly generated sequences were deposited in the GenBank database under the accession numbers MN396571-MN396578.
Babesia spp.
The MIR for Babesia spp.
was 0.7% (4/593) across the study locations, with three positive pools in Kano state (MIR 0.8%) and one in Jigawa state (MIR 0.7%) (Table ). No significant difference (P > 0.05) was observed across the sampling locations. Three of the four positive pools were detected in H. dromedarii (MIR 0.6%) and one in H. impeltatum (MIR 5.6%) (Table ). BLASTn analysis showed that two sequences had 100% identity to B. occultans (GenBank: MG920540.1), one had 100% identity to B. caballi (GenBank: MG052892.1) and one showed 98.5% similarity to Babesia spp. (GenBank: KC249945.1). A further attempt to characterize the undifferentiated Babesia species using a different primer pair showed 100% homology with Babesia spp. (GenBank: KC249946.1). The newly generated sequences were deposited in the GenBank database under the accession numbers MN394378-MN394381.
Co-detection of tick-borne pathogens in ticks
A low co-detection rate was observed, with all co-detections occurring in Kano state only. Co-detection was observed for Rickettsia spp. + Babesia spp. in one tick pool, and for Rickettsia spp. + C. burnetii in another tick pool.
Molecular detection of tick-borne pathogens in the blood of camels
“Candidatus Anaplasma camelii”
The overall prevalence of “Ca. A. camelii” across the three study locations was 40.3% (71/176). Kano state had the highest prevalence of 59.8% (55/92), followed by Jigawa with 37.9% (11/29) and Sokoto state with 9.1% (5/55) (Table ). GenBank analysis of representative sequences (n = 15, product size 345 bp) selected from all study locations showed 99.6–100% similarity to the 16S rDNA of Anaplasma platys (GenBank: MH762081.1) and “Ca. A. camelii” (GenBank: KF843827.1). In an attempt to differentiate these two species, a semi-nested PCR targeting the 16S rRNA gene of Anaplasma spp. was used, generating a PCR product of 426 bp. BLASTn analysis of the sequences yielded “Ca. A.
camelii” (GenBank: KF843825.1) with the highest identity score of 100% (GenBank: KF843825.1). The newly generated sequences were deposited in the GenBank database under the accession numbers MN396629-MN396638. Babesia spp. and Anaplasma marginale DNA of neither pathogen was amplified in the blood of camels. Risk factors associated with “ Candidatus A. camelii” infection in blood of camels A higher number of female camels were infected as compared to males, although no significant difference ( P > 0.05) was observed (Table ). Furthermore, the prevalence was higher in camels > 5 years-old across the three study areas compared with those < 5 years-old old. A significant difference was observed between age groups ( P < 0.05) (Table ). Camels with poor or moderate body condition had higher infection rates with “ Ca . A. camelii” compared to those with a good body condition with a significant difference ( P < 0.05). Only one camel (20.0%, 1/5) with a good body condition score was infected in Sokoto state (Table ). Finally, camels infested with ticks were two times more likely to be infected with “ Ca . A. camelii” compared with those without ticks (OR: 1.59, 95% CI: 0.9–2.9). Phylogenetic and haplotype analysis of “ Ca . A. camelii” “ Candidatus A. camelii” nucleotide sequences from this study clustered together with all other “ Ca . A. camelii” sequences from Saudi Arabia (GenBank: KF843823-KF843825) and Egypt (GenBank: MG564235-MG564237) (Fig. ). In addition, A. platys sequences from a previous study in Nigeria clustered with the sequences from this study. Only one haplotype was found in this study (Fig. ), which is similar to the haplotype detected from other “ Ca . A. camelii” in Egypt and Saudi Arabia based on the sequences retrieved from the NCBI database. This haplotype differs slightly by a single mutation from A. platys of dogs in Malaysia (GenBank: KU500910). Furthermore, it also differs by 3 mutations from A. phagocytophilium and by 8 mutations from A. 
marginale (Fig. ). Of the 176 camels examined, 92 (52.3%) were infested with ticks from a total collection of 593. All ticks collected were identified as adult with no immature stages comprising of 440 (74.2%) males and 153 (25.8%) females. The largest number of ticks was collected from Kano (396; 66.8%) followed by Jigawa (145; 24.5%) and Sokoto (52; 8.8%) state (Table ). Altogether, 7 species of ticks were identified: Hyalomma dromedarii ( n = 465; 78.4%), H. truncatum ( n = 87; 14.7%), H. rufipes ( n = 19; 3.2%), H. impressum ( n = 2; 0.3%), H. impeltatum ( n = 18; 3.0%), Amblyomma variegatum ( n = 1; 0.2%) and Rhipicephalus evertsi evertsi ( n = 1; 0.2%) (Table ). Three tick species ( H. dromedarii , H. truncatum and H. rufipes ) were found in all three locations while H. impeltatum was found only in Kano and Sokoto. Amblyomma variegatum was collected in Kano state and R. e. evertsi in Jigawa state only and lastly, H. impressum in Kano state (Table ). To confirm the identity of these species of ticks, molecular identification was carried out. A BLASTn query of the obtained sequences for the 16S rRNA gene revealed a high identity match ranging from 98.9% to 100% for all tick species except for H. rufipes . Due to ambiguity of the 16S rRNA gene for H. rufipes , the 12S rRNA gene was amplified and a BLASTn query of the obtained sequences still could not clear this ambiguity. Lastly, we amplified the cox 1 gene followed by sequencing. The sequence analysis gave 99.8% homology with H. rufipes (GenBank: KX000641.1). The newly generated sequences were deposited in the GenBank database under the accession numbers MN394427-MN39444 ( 16S rRNA gene), MN394457-MN394461 ( 12S rRNA gene) and MN601291-MN601294 ( cox 1). 
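The identity scores quoted throughout these results can be understood, at their simplest, as the share of matching positions between a query and a reference sequence over the aligned length. A minimal sketch for two already-aligned, equal-length fragments (real BLASTn additionally handles gaps and local alignment, which are omitted here):

```python
def percent_identity(seq_a: str, seq_b: str) -> float:
    """Percent identity between two aligned, equal-length sequences.
    Gap handling and local alignment (as in real BLASTn) are omitted."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(a == b for a, b in zip(seq_a.upper(), seq_b.upper()))
    return 100.0 * matches / len(seq_a)

# Two 500 bp fragments differing at a single position score 99.8%,
# the level reported here for the cox1 match to H. rufipes.
print(round(percent_identity("A" * 499 + "C", "A" * 500), 1))  # 99.8
```

In practice the reported values come from BLASTn itself, which also scores gaps and finds the best local alignment; this sketch only conveys the arithmetic behind a figure such as 99.8%.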
Tick infestation rates were slightly higher in male camels across the three locations, with 65.6% (40/61) in Kano, 63.2% (12/19) in Jigawa and 27.8% (10/36) in Sokoto, compared with female camels at 64.5% (20/31), 60.0% (6/10) and 21.1% (4/19), respectively. No significant difference was observed between sexes across the study locations ( P > 0.05) (Table ). The infestation rate was significantly higher in camels > 5 years-old than in those < 5 years-old ( P < 0.05) (Table ). The odds of infestation with ticks were higher in camels > 5 years-old from Kano (OR: 1.44, 95% CI: 0.49–4.18) compared with Jigawa (OR: 0.76, 95% CI: 0.15–4.30) and Sokoto states (OR: 0.49, 95% CI: 0.14–1.73) (Table ). Finally, camels with a good body condition score were significantly less infested with ticks across the three study locations than those with a poor or moderate body condition score ( P < 0.05) (Table ). Rickettsia spp. Altogether, 67 out of 231 tick pools (comprising 593 ticks) produced bands of the correct length in the gltA PCR and all were sequenced. Of those, 51 resulted in good quality sequences and could be evaluated as Rickettsia spp. Therefore, the minimum infection rate (MIR) for tick pools for Rickettsia spp. was 8.6%. Across the study locations, 31 pools were positive (MIR 7.8%) in Kano state, 14 pools (MIR 9.7%) in Jigawa state and 6 pools (MIR 11.5%) in Sokoto state (Table ). No significant difference ( P > 0.05) was observed between the study locations. Rickettsia spp. was detected in four tick species, with H. rufipes having the highest MIR (36.8%) and H. impeltatum the lowest (5.6%); the others were H. truncatum (16.1%) and H. dromedarii (6.2%) (Table ). Following a BLASTn query on the NCBI database, 45 of these sequences showed 100% identity with R. aeschlimannii (GenBank: MH267736.1) and 6 showed high similarity scores ranging between 98.7–99.7% (GenBank: MH267736.1).
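The minimum infection rate used here follows the standard pooled-testing convention: the number of positive pools divided by the total number of ticks tested, on the assumption that each positive pool contains only one infected tick. A minimal sketch reproducing the figures reported in this section:

```python
def minimum_infection_rate(positive_pools: int, total_ticks: int) -> float:
    """MIR (%) assuming each positive pool holds exactly one infected tick."""
    return 100.0 * positive_pools / total_ticks

# 51 Rickettsia-positive pools out of 593 ticks give the reported MIR of 8.6%.
print(round(minimum_infection_rate(51, 593), 1))  # 8.6
```

Because a pool may contain more than one infected tick, the MIR is a lower bound on the true prevalence, a caveat the Discussion makes explicit.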
BLASTn analysis of one of the sequences obtained from the gltA gene of Rickettsia spp. gave 100% homology with C. burnetii (GenBank: CP035112.1). To further confirm the genotypes of Rickettsia spp., the ompA and ompB genes were partially amplified. We tested all gltA -positive samples with good quality sequences ( n = 51) for both ompA and ompB . For ompA and ompB , 39 and 43 tick pools, respectively, were positive, from which 13 and 16 amplicons were selected for sequencing and yielded good quality sequences. BLASTn analysis of the sequences obtained for both genes showed 99.9–100% similarity with R. aeschlimannii on GenBank. All newly generated sequences were deposited in the GenBank database under the accession numbers MN601304-MN601344 ( gltA ), MT126809-MT126818 ( ompA ) and MN601295-MN601303 ( ompB ). Anaplasma / Ehrlichia spp. The PCR targeting the 16S rRNA gene of Anaplasma / Ehrlichia spp. was positive in 62 out of 231 tick pools, with a MIR of 10.5%. By location, the MIR in Sokoto state was 15.4%, followed by Kano state with 10.1%, while Jigawa state had the lowest MIR of 9.7%. No significant difference ( P > 0.05) was observed across the sampling locations (Table ). Five tick species were positive: H. dromedarii (MIR 8.0%), H. truncatum (MIR 16.1%), H. rufipes (MIR 36.8%), H. impeltatum (MIR 11.1%) and H. impressum (MIR 100.0%) (Table ). BLASTn analysis of the 240 bp sequences obtained from positive samples of the Anaplasma / Ehrlichia PCR showed 98–100% similarity to Peptoniphilus spp. (GenBank: LC145547.1), “ Candidatus Midichloria mitochondrii” (GenBank: KU559921.11) and a Rickettsiales bacterium (GenBank: DQ379964.1). Anaplasma marginale Anaplasma marginale DNA was not detected in DNA from ticks. Coxiella burnetii The DNA of C. burnetii was detected in 17 out of 231 tick pools, with a MIR of 2.9%. The MIR for the states of Sokoto, Kano and Jigawa was 3.8%, 3.0% and 2.1%, respectively (Table ).
No significant difference ( P > 0.05) was observed across the study locations. Most C. burnetii -positive tick pools were H. dromedarii (MIR 3.4%) and H. truncatum (MIR 1.1%) (Table ). Only 1 out of 87 (1.1%) H. truncatum ticks in the pools was positive for C. burnetii (Table ). Sequences had similarity scores ranging between 99.2–100% to C. burnetii (GenBank: CP035112.1). The newly generated sequences were deposited in the GenBank database under the accession numbers MN396571-MN396578.
Babesia spp. The MIR for Babesia spp. was 0.7% (4/593) across the study locations, with three positives in Kano state (MIR 0.8%) and one in Jigawa state (MIR 0.7%) (Table ). No significant difference ( P > 0.05) was observed across the sampling locations. Three of the four positive pools were detected in H. dromedarii (MIR 0.6%) and one in H. impeltatum (MIR 5.6%) (Table ). BLASTn analysis showed that two sequences had 100% identity to B. occultans (GenBank: MG920540.1), one had 100% identity to B. caballi (GenBank: MG052892.1) and one showed 98.5% similarity to Babesia spp. (GenBank: KC249945.1). A further attempt to characterize the undifferentiated Babesia species using a different primer pair showed 100% homology with Babesia spp. (GenBank: KC249946.1). The newly generated sequences were deposited in the GenBank database under the accession numbers MN394378-MN394381. Co-detection of tick-borne pathogens in ticks A low co-detection rate was observed in the study, with all co-detections occurring in Kano state only. Co-detection was observed for Rickettsia spp. + Babesia spp. in one tick pool as well as for Rickettsia spp. + C. burnetii in another tick pool. Molecular detection of tick-borne pathogens in the blood of camels “ Candidatus Anaplasma camelii” The overall prevalence of “ Ca . A. camelii” across the three study locations was 40.3% (71/176). Kano state had the highest prevalence of 59.8% (55/92), followed by Jigawa with 37.9% (11/29) and Sokoto state with 9.1% (5/55) (Table ). GenBank analysis of representative sequences ( n = 15) selected from all the study locations, with a product size of 345 bp, showed 99.6–100% similarity to the 16S rDNA of Anaplasma platys (GenBank: MH762081.1) and “ Ca . A. camelii” (GenBank: KF843827.1). In an attempt to differentiate these two species, a semi-nested PCR targeting the 16S rRNA gene of Anaplasma spp. was used, generating a PCR product of 426 bp. BLASTn analysis of the sequences yielded “ Ca . A.
camelii” (GenBank: KF843825.1) with the highest identity score of 100%. The newly generated sequences were deposited in the GenBank database under the accession numbers MN396629-MN396638. Babesia spp. and Anaplasma marginale DNA of neither pathogen was amplified in the blood of camels. Risk factors associated with “ Candidatus A. camelii” infection in the blood of camels A higher number of female camels were infected compared with males, although no significant difference ( P > 0.05) was observed (Table ). Furthermore, the prevalence was higher in camels > 5 years-old across the three study areas compared with those < 5 years-old. A significant difference was observed between age groups ( P < 0.05) (Table ). Camels with a poor or moderate body condition had significantly higher infection rates with “ Ca . A. camelii” than those with a good body condition ( P < 0.05). Only one camel (20.0%, 1/5) with a good body condition score was infected in Sokoto state (Table ).
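Odds ratios with Wald 95% confidence intervals, like those quoted for these risk factors, can be computed from a 2×2 table of exposure by infection status. A minimal sketch with illustrative counts (not the study's actual data):

```python
from math import exp, log, sqrt

def odds_ratio(a: int, b: int, c: int, d: int) -> tuple:
    """a: exposed & infected, b: exposed & uninfected,
    c: unexposed & infected, d: unexposed & uninfected.
    Returns (OR, CI lower, CI upper) using a Wald 95% interval."""
    or_ = (a * d) / (b * c)
    half_width = 1.96 * sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return or_, exp(log(or_) - half_width), exp(log(or_) + half_width)

# Illustrative 2x2 counts: 40/100 tick-infested camels infected
# versus 22/76 non-infested camels.
or_est, lo, hi = odds_ratio(40, 60, 22, 54)
print(round(or_est, 2), round(lo, 2), round(hi, 2))
```

A confidence interval that spans 1, as with the reported OR of 1.59 (95% CI: 0.9–2.9), indicates that the association does not reach statistical significance at the 5% level.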
Finally, camels infested with ticks were more likely to be infected with “ Ca . A. camelii” than those without ticks (OR: 1.59, 95% CI: 0.9–2.9). Phylogenetic and haplotype analysis of “ Ca . A. camelii” “ Candidatus A. camelii” nucleotide sequences from this study clustered together with all other “ Ca . A. camelii” sequences from Saudi Arabia (GenBank: KF843823-KF843825) and Egypt (GenBank: MG564235-MG564237) (Fig. ). In addition, A. platys sequences from a previous study in Nigeria clustered with the sequences from this study. Only one haplotype was found in this study (Fig. ), similar to the haplotype detected for other “ Ca . A. camelii” in Egypt and Saudi Arabia based on sequences retrieved from the NCBI database. This haplotype differs by a single mutation from A. platys of dogs in Malaysia (GenBank: KU500910), by 3 mutations from A. phagocytophilum and by 8 mutations from A. marginale (Fig. ). This study confirmed the occurrence of several tick-borne pathogens and the species diversity of ticks infesting camels in Nigeria. The overall rate of tick infestation in this study was 52.3%, lower than the 80.0% reported by Abdullahi et al. in Kebbi state, Nigeria. The difference between the two studies can most likely be attributed to the smaller sample size of the latter. Other factors possibly contributing to differences in tick infestation include geographical distribution, climatic factors, the management system and the frequency of acaricide application. We observed high numbers of male ticks compared with female ticks, which is unsurprising considering that females are known to detach from their host a few days after feeding in order to oviposit, while males stay attached for weeks before detaching. In this study, ticks from Nigeria were morphologically identified and confirmed using molecular markers targeting several genes. To date, studies on tick identification in camels from Nigeria have been based on morphology.
Combining several mitochondrial markers ( 12S , 16S and cox 1), we were able to identify several species of Hyalomma ticks. The use of these markers has proven increasingly useful for tick identification in several studies. Hyalomma dromedarii was the most frequent tick species encountered in this study. Similar observations have been reported by other researchers working on ticks of camels in Nigeria, within other African countries and in other parts of the world. The dromedary camel was found to be the preferred host for this tick species, but it also infests sheep, goats, cattle and horses. In another study conducted at a single site in Nigeria, Kamani et al. reported H. impeltatum as the most prevalent tick species of camels. Several factors may account for these differences. First, we sampled three study locations in north-western Nigeria. Second, we sampled towards the end of the rainy season, while Kamani et al. sampled throughout the dry season. Nonetheless, previous studies on tick abundance on camels and the influence of season in both Kano and Sokoto states reported H. dromedarii as the most prevalent tick species of dromedaries regardless of season, as this species predominated on camels during both dry and wet seasons. Most likely, abiotic factors such as temperature and humidity play a role in this observation, but this remains speculative. Hyalomma impressum was the least prevalent tick species, with only two specimens collected. This corroborates the observations of previous studies on ticks of camels in Nigeria and Algeria, where two and three specimens, respectively, were collected. It is likely that the environment as well as sampling time could also affect the prevalence of H. impressum . All other species of the genus Hyalomma , such as H. rufipes , H. truncatum and H. impeltatum , as well as A. variegatum and R. evertsi evertsi , have previously been reported infesting camels in Nigeria.
Rickettsia aeschlimannii belongs to the spotted fever group of Rickettsia and is maintained and/or transmitted primarily by ticks. The MIR of R. aeschlimannii in ticks collected from camels was 7.8–11.5% across the three study locations (Kano, Jigawa and Sokoto states) and 5.6–36.8% across tick species. This confirms the presence of this Rickettsia species in ticks infesting camels in Nigeria. A previous study in Kano, Nigeria reported an infection rate of 23.8% in ticks from camels. The prevalence of pathogens detected in ticks is shown here as the minimum infection rate, assuming that only one sample is positive in each positive pool. Of course, this approach is only approximate, and prevalence rates for the identified pathogens might be higher than reported in the present study. All isolates in this study were identified as R. aeschlimannii , suggesting that the organism is endemic and widespread among Hyalomma tick species infesting camels in Nigeria. Furthermore, this bacterium has been detected in Hyalomma ticks in several African countries. The highest prevalence of R. aeschlimannii DNA was detected in H. rufipes , which agrees with reports of previous studies in Nigeria, Egypt and Senegal. Furthermore, all Hyalomma species were positive for R. aeschlimannii DNA except H. impressum . Similar findings were registered in previous studies in Algeria and Nigeria. Piroplasms of the genera Babesia and Theileria are tick-borne pathogens of livestock, including camels. The overall MIR of piroplasms ( Babesia spp.) in ticks in this study was low, and these protozoan parasites were not detected in blood. This corroborates previous reports on piroplasms (both Babesia spp. and Theileria spp.) of camels. Previously, a low prevalence of Theileria ovis was reported in the blood of camels from Sokoto, Nigeria using reverse line blot (RLB) and in H. dromedarii ticks in Saudi Arabia.
Nevertheless, a high prevalence of 74.5% has been registered in the blood of camels in Sudan. In the present study, B. occultans DNA was amplified and confirmed by sequencing in Hyalomma ticks ( H. impeltatum and H. dromedarii ) for the first time since its first morphological description in the haemolymph of Hyalomma ticks over three decades ago in Nigeria. The DNA of B. occultans has been detected in other tick species, such as H. asiaticum in China, H. marginatum in Tunisia, and Rhipicephalus turanicus and H. marginatum rufipes in Turkey. In addition, DNA of this pathogen has been detected in the blood of cows in Italy displaying fever, anemia and hematological alterations. Furthermore, DNA of B. caballi was amplified in a H. dromedarii tick. The detection of B. caballi in our study is not surprising considering that both camels and horses are infested with similar tick species. Previous studies on camel piroplasms have detected B. caballi in the blood of camels in Sudan, Jordan and Iraq. The low infection rate of C. burnetii in Hyalomma tick species reported in the present study is comparable with that reported elsewhere for Hyalomma ticks. Most of the positives were detected in H. dromedarii and only one in a H. truncatum tick (1.1%). In a similar study in Egypt, C. burnetii was detected exclusively in H. dromedarii , while in China, most of the infections were in H. asiaticum asiaticum . Furthermore, while Coxiella -like bacteria have been found in ticks as endosymbionts and play a role in tick fitness, C. burnetii is responsible for Q fever in vertebrates, including humans. Since ticks serve as carriers of C. burnetii in livestock, the close association between humans and livestock could lead to human infections. An epidemiological survey among veterinarians and other high-risk individuals with regular contact with animals showed high antibody titres to C. burnetii , suggesting possible transmission.
Anaplasmosis in camels due to Anaplasma marginale causes subclinical disease, as reported in other studies. In our study, camels from the three study areas tested positive for a novel species of Anaplasma named “ Ca . A. camelii” by Bastos et al. This species is genetically related to A. platys . An earlier study in one of the study areas (Sokoto) reported a high prevalence of A. platys in camels. The overall prevalence of “ Ca . A. camelii” in camels in our study was 40.3%, which is comparable to results reported by Lbacha et al. in Morocco, but higher than data from China (7.2%) and Tunisia (17.7%). The variations in prevalence rates may result from differences in husbandry practices, tick control programmes and reservoir hosts. Phylogenetic analyses based on DNA sequencing clustered the A. platys reported earlier in one of the study areas (Sokoto state) (GenBank: KJ832066-KJ832067), which showed 99.5% identity, with the sequences obtained in our study (GenBank: MN396629-MN396638). It is therefore possible that the A. platys reported earlier by Lorusso et al. could in fact be “ Ca . A. camelii”. The 16S sequences generated in the present study were identical to other “ Ca . A. camelii” sequences isolated in Saudi Arabia. The haplotype analysis in our study shows that only one nucleotide differentiates A. platys from “ Ca . A. camelii”, an observation similar to that made by Sazmand et al. in Iran. Risk factors associated with “ Ca . A. camelii” infection indicate that female camels were more often infected than males, corroborating previous studies. Immunosuppression associated with pregnancy and lactation has been suggested as responsible for this observation. The opposite was the case with respect to tick infestation, as more male camels were infested than females.
It could also be that male camels, despite being more heavily infested with ticks owing to their natural competition for space triggered by androgenic hormones, had better immunity against tick-borne infection than females. A poor body condition score was a risk factor for both tick infestation and infection with “ Ca . A. camelii”. Older animals were more often infested with ticks, as well as positive for “ Ca . A. camelii” DNA, than younger camels. According to Azmat et al., the infection rate of camels with anaplasmosis increases with age. The occurrence of “ Ca . A. camelii” infection in our study was positively associated with the presence of ticks. This finding confirms previous reports on anaplasmosis of camels. Anaplasma marginale is responsible for intra-erythrocytic anaplasmosis in bovines, but we did not find A. marginale in the investigated camels or ticks, an observation that has also been reported by other researchers. Furthermore, it has been postulated that dromedaries are not relevant reservoirs for the already named Anaplasma species, which include A. marginale , A. centrale , A. phagocytophilum and A. bovis . This study revealed the occurrence of different tick species and different tick-borne pathogens in ticks infesting camels, as well as in their blood, in Nigeria. We identified several species of Hyalomma ticks and their associated tick-borne pathogens. Pathogen DNA detected in ticks using PCR and sequencing included R. aeschlimannii , B. caballi , Babesia spp. and C. burnetii . Furthermore, we amplified B. occultans DNA in Hyalomma ticks infesting camels in Nigeria. “ Candidatus A. camelii”, a novel species variant of Anaplasma , was the only pathogen amplified in the blood of the investigated camels. The detection of the two zoonotic pathogens, R. aeschlimannii and C. burnetii , may necessitate further investigation of the role of camels in their maintenance and reservoir status.
Pandemic lifeworlds: A segmentation analysis of public responsiveness to official communication about Covid-19 in England
Accordingly, there have been several audience segmentation studies in the context of the Covid-19 pandemic. Some were based on subpopulations of interest, e.g. identifying differences by race, ethnicity, and age in vaccination attitudes and behavior among Medicaid parents in Florida, USA, and a study of a single U.S. county based on variables available in a longitudinal study originally designed for other purposes. Others were national in scope but limited in the range of variables studied: e.g. Kamenidou et al. surveyed 3,359 Greek respondents and identified five segments based on a continuum of self-reported Covid protective behavior, but had little additional data profiling attitudes, trust, and media use. Schneider et al. used an 11-item instrument including perceptions about vaccine efficacy and the politicization of Covid vaccination with a sample of 583 U.S. adults. Stubenvoll focused on Covid-related misperceptions and false beliefs among a sample of 913 Austrians, providing a rich profile of acceptance and rejection of scientific (mis)information, though not of attitudes and beliefs about Covid or about themselves that might inform intervention design. Ihm and Lee conducted a survey of 723 South Korean adults, segmenting on social resources, social support, and media use, providing provocative insights regarding well-resourced versus vulnerable populations, though without the accompanying psychosocial data typically included in health audience segmentation. Other national surveys were more robust with respect to the psychosocial determinants of health behavior that are typically recommended for incorporation into health audience segmentation analyses. For example, Thaker et al. surveyed 1,054 Australian adults regarding vaccine intentions and behavior with 16 items based on the Theory of Planned Behavior, plus items on trust in media, doctors, and official sources.
This analysis found a five-segment solution, again representing a continuum of vaccine enthusiasm, but also identifying a segment ambivalent about vaccination personally yet motivated to protect the health of others. They found that trust in all channels except social media was highest for what they called Vaccine Enthusiasts (a group who were predominantly male); trust tended to decline with reduced vaccine support, except for trust in social media. They called for future research to cast a wider net in terms of determinants of Covid behaviors than TPB alone. More ambitiously, Zhou, Li, and Shen drew on several theories of health determinants in addition to TPB in a segmentation study sampling 1,041 Americans, again focusing on vaccine hesitancy. This approach yielded relatively rich profiles, capturing some of the complexity of ambivalence and of different levels of risk perception and perceived vaccine efficacy, among other psychosocial variables, though it lacked the data on source trust provided by Thaker et al. Our study adds to this body of research in four ways. First, it addresses a gap in existing Covid-19 audience segmentation research by combining rich psychosocial detail about audience responses—using dozens of items beyond the Theory of Planned Behaviour measures used in prior studies—with data on information source trust and use such as that used by Thaker et al., to provide the most developed audience segmentation profile concerning Covid-19 to date. Second, it is based upon a national survey with a core nationally representative sample of over 5,000 English adults (with quotas set for age, gender, region, and social grade), with boosters used to achieve larger samples for ethnic minorities, providing a robust national-level profile, and one that can be usefully compared to the other efforts worldwide described above.
Third, our segments are profiled against behaviours such as masking and social distancing as well as vaccination, the focus of most prior Covid segmentation research. Fourth, our study emerged from a collaboration between a national government agency that has been central to the design and implementation of health communication during the pandemic (the UK Health Security Agency, UKHSA); a leading market research survey company (Savanta); and the University of Leeds. It is rare for segmentation studies to be so closely embedded in national institutions that are responsible for critical policy actions, or to combine the efforts of academics, professional market researchers, and government agencies. This combination allowed us to work with a variety of perspectives in the design and analysis of the study and may provide some lessons for such collaborations in future. In summary, we ask the following key research questions. What are the major audience segments, drawing on relevant attitudes, motivations, and other determinants of health behavior, identifiable with respect to pandemic/Covid-19 communication in the UK? How do these segments compare with one another with respect to key attitudes, demographics, and communication channels trusted and used? In addition, we compare how well the segments we find predict protective public health behaviours and vaccination refusal relative to the demographic variables typically associated with such behaviours. Finally, we also discuss some methodological lessons and insights for public health managers and communicators regarding communication strategies that might be most appropriate with these various population segments.

The University of Leeds Faculty of Arts Humanities and Culture Ethics Review Board approved this study (reference: FAHC 20–093 AHC FREC). Members of the survey panel were recruited by a third party (the market research company, Savanta) and we (the researchers) did not have access to their names or addresses.
Panel members signed written consent with the third party.

Survey data collection

The market researchers conducted 5,525 surveys online in England between 4 and 24 January 2022 and 105 surveys via telephone between 26 January and 7 March. The questions were then divided into 12 blocks of related items, and for each we measured the respondent-level variability. A set of rules was devised to identify respondents who showed little or no variation in their responses (either across multiple blocks or overall). Some blocks were omitted from consideration since it was deemed that a consistent response may be reasonable (e.g. satisfaction with various aspects of one's life). We also plotted respondent-level mean and variance for several key blocks of attitudinal statements. A key feature of the plots was the existence of a small proportion of respondents who completely agreed with (almost) every statement (seen as low variability, high mean), even when statements had contradictory meanings. These respondents were also removed from further analysis. In total, 329 respondents were removed, leaving 5,178 for the segmentation analysis. The resulting 5,178 online respondents comprised a core, nationally representative sample of UK adults with quotas set for age, gender, region, and social grade, with boosters used to achieve larger samples for ethnic minorities (1,405), those in deciles 1–3 of the Index of Multiple Deprivation (1,975), and those in 20 local authorities that had seen particularly enduring levels of Covid-19 transmission (558). The additional 105 surveys conducted via landline telephone were with people who were digitally excluded (defined as never having used the internet or not having used it in the last three months), for a total of 5,283 included in the segmentation cluster analysis. 51% of the sample were female and 49% male.
83% were white British, with the remaining 17% including persons of Indian, Pakistani, Bangladeshi, and African origin as the largest ethnic minority groups.

Development of questionnaire instrument

This project was a collaboration between the UKHSA, market researchers and academic researchers. An initial inventory of questions was developed utilizing items that had proved useful in related previous consumer research, had been used in government tracking efforts previously, or were of particular policy concern to UKHSA. These were supplemented by the academic researchers to better approximate measurement of variables of theoretical interest. Items used to create measures that shaped the segmentation analysis are discussed below; other individual items used to profile segments (e.g., vaccination status, two-week self-reported protective behaviours, media trust, and preference) are identified in the results section. Given the eclectic development of the survey items, measures were created empirically using exploratory factor analysis. Initially the questions were divided into three thematically similar batches, to make exploratory factor analysis more manageable, and factor analysis was used to identify reliable measures. k-means clustering was used to generate six clusters, five of which were found to be readily interpretable, whilst the sixth appeared ambiguous and contradictory. In particular, this segment was found to agree largely with the agree/disagree Likert-type attitudinal statements (even contradictory statements), a phenomenon known variously as courtesy or acquiescence bias; other items, such as behavioral self-reports, did show consistency and variability across the range of the scale.
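The straight-liner/acquiescence screening described above can be sketched in Python. This is a minimal illustration only: the function name, thresholds, and item names are our own assumptions, not the study's actual rules.

```python
from statistics import pvariance

def screen_respondents(responses, blocks, var_floor=0.1, max_flat_blocks=1):
    """Flag respondents whose answers barely vary within item blocks.

    responses: dict respondent_id -> dict item_name -> score (e.g. 1-5 Likert)
    blocks:    dict block_name -> list of item names in that block
    Returns the set of respondent ids flagged for removal.
    """
    flagged = set()
    for rid, answers in responses.items():
        flat_blocks = 0
        for items in blocks.values():
            scores = [answers[i] for i in items]
            if pvariance(scores) < var_floor:
                flat_blocks += 1
        # A respondent who is flat in more than `max_flat_blocks` blocks
        # (e.g. agrees with every statement, even contradictory ones)
        # is treated as a straight-liner / acquiescent responder.
        if flat_blocks > max_flat_blocks:
            flagged.add(rid)
    return flagged
```

In practice one would also inspect the respondent-level mean (low variability combined with a high mean is the signature of blanket agreement described in the text).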
Several attempts were then made to remove unreliable respondents (in some cases such patterns appeared to be due to people rushing through the questions, so we tried increasing the maximum elapsed time required to accept a respondent), but this did not resolve the issues with response bias posed by this segment, leading us to conclude the problem was largely due to acquiescence bias. It was of particular concern because the segment had a disproportionately large percentage of ethnic minority respondents, and the literature suggested that cultural differences could result in differential patterns of acquiescence bias. That is, the literature reports that some cultural groups are disinclined to openly express outright disagreement, though the extent of agreement does vary, a pattern reflected in this group of our respondents. To address this problem, the market research firm overseeing the segmentation analysis engaged a consultant (co-author Phil Wright) with expertise in addressing complex and difficult market research challenges, to oversee and conduct further data analysis and produce a segmentation scheme that addressed the problem of acquiescence bias among some respondents. The first step was to standardize the data. In most cases this was done to ensure that question means were zero and standard deviations were one. For two blocks of questions, however (where scale usage by the respondents varied considerably and where the acquiescence bias was most evident), standardization was instead performed at the respondent level rather than the question level. This adjusts for acquiescence bias by looking at variation from the respondent's own mean, thereby eliminating the problem of some respondents having a mean shifted in the positive direction due to a tendency to agree with statements. The question-level means and standard deviations were found to be close to 0 and 1 respectively. The standardization procedures did impact factor structures, requiring recreation of the measures.
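Within-respondent standardization of this kind can be sketched as follows. This is an illustrative sketch (function and variable names are ours): each respondent's answers are z-scored against their own mean and standard deviation rather than against the question-level mean.

```python
from statistics import mean, pstdev

def standardize_within_respondent(rows):
    """Z-score each respondent's answers against their own mean and SD.

    This strips out a constant tendency to agree (acquiescence) before
    factoring, since only variation around the respondent's own mean
    survives.

    rows: list of lists, one inner list of numeric answers per respondent.
    """
    out = []
    for answers in rows:
        m, s = mean(answers), pstdev(answers)
        if s == 0:  # a perfectly flat respondent carries no usable signal
            out.append([0.0] * len(answers))
        else:
            out.append([(a - m) / s for a in answers])
    return out
```

A respondent who answered [4, 4, 5] and one who answered [1, 1, 2] would, after this step, look identical: both disagree with themselves only on the third item.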
Splitting the data and iteratively creating factors yielded 33 factors. Due to the independent nature of their construction, several of these were highly correlated. Since 33 factors were too many to be considered for clustering, and because several themes overlapped, a pragmatic decision was made to use factor analysis to combine these into just 14 'meta-factors'. These meta-factors allowed us to work with a manageable set of composite variables. The segmentation scheme was initially developed using these initial 14 meta-factors, and then we reproduced the scheme using a refined set of clearly reliable factors (described below).

Segmentation analysis procedures

It should be noted at the outset that segmenting audiences for marketing and communication purposes is typically an iterative process that involves both judgment and empiricism. k-means clustering is typically used in audience segmentation, but the choice of variables used in creating such clusters, and the number of clusters selected, is based on utility and interpretability (e.g., ). k-means clustering was supplemented using multi-dimensional scaling, which can provide additional conceptual insight about underlying differences between segments. The following is a summary of the steps taken in developing the segmentation and in determining the number of clusters to be used in our analysis. We applied k-means clustering to the 14 factors and then to a sub-set of these factors; however, no convincing solution emerged after several attempts. Multi-dimensional scaling was used to produce a two-dimensional plot of the different factors. Each respondent was assigned to the map (using their weighted position based upon the 14 factors) and was then assigned polar co-ordinates. Those with a radius greater than a given cut-off (varied, but used to remove the most neutral of respondents) were divided into 36 sectors.
Here the profiling was found to be highly predictable (a series of sinewaves, each lagged by a different amount to reflect the spread of the 14 meta-factors around the plot), but unfortunately not as discriminatory as we had hoped. Then the above multi-dimensional scaling approach was replicated, but using only two metrics (found by forcing the 14 meta-factors into two dimensions). These broadly measured (i) engagement/concern with Covid-19, and (ii) personal responsibility for, and ownership of, one's health. This was a little clearer but still failed to produce a workable solution. In using two dimensions, whilst easier to explain, we had moved too far away from the multi-dimensional nature of the data. Next, k-means clustering was applied to the two metrics above. Here most solutions were found to be highly unstable, as determined by plotting segment positions using multi-dimensional scaling for approximately 20 different clustering runs for different numbers of segments. Only one input resulted in near-identical results every time: five clusters, but with the central group of about 2% of respondents removed. The central group were found to be those with average responses across most/all factors. That is, they had no strong opinions either way. Unsurprisingly, such respondents reduced the clarity of the segmentation scheme. Using the above result as a starting point, these five clusters were profiled. Four of the five were similar to the original segments created in the initial k-means analysis mentioned above (although in all cases a little clearer and easier to interpret). A fifth cluster was new. Of the two segments that no longer appeared in our analysis, one was the earlier identified problem segment that had been characterized by acquiescence bias; the other was an interesting segment that we wished to retain.
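The stability check described above (re-running the clustering with different random starts and comparing the resulting solutions) can be illustrated with a minimal sketch. The toy k-means and the instability metric below are our own illustration, not the study's code.

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def kmeans(points, k, seed, iters=50):
    """Minimal Lloyd's k-means on 2-D points; returns the centroids."""
    rng = random.Random(seed)
    cents = rng.sample(points, k)
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda j: dist(p, cents[j]))].append(p)
        cents = [
            (sum(x for x, _ in g) / len(g), sum(y for _, y in g) / len(g))
            if g else cents[j]
            for j, g in enumerate(groups)
        ]
    return cents

def instability(points, k, runs=20):
    """Largest distance from any centroid in a re-run to its nearest
    counterpart in a baseline run. Near zero means the k-cluster
    solution is stable across random initializations; large values
    mean the solution depends on the starting point."""
    base = kmeans(points, k, seed=0)
    worst = 0.0
    for s in range(1, runs):
        for c in kmeans(points, k, seed=s):
            worst = max(worst, min(dist(c, b) for b in base))
    return worst
```

For well-separated data the metric stays near zero for the "right" k; for unstable inputs (like most of those the authors describe trying) it jumps around between runs.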
For each of the five new clusters we sought to identify sub-clusters, using both k-means clustering and various sensible partitions using factors or questions. In each case nothing worth retaining was produced. The 'missing' segment was recreated by re-allocating respondents of this segment to a sixth segment (effectively creating a segment identical to the original but changing the nature of the other segments). Re-profiling suggested that, in the main, this was a positive change to the segmentation. We then considered the central segment and agreed that this was in fact a quite reasonable segment in its own right (people without a strong or distinctive point of view on this issue). This meant we did not have to drop the central 2% of respondents. This resulted in seven segments in total. Finally, a matrix of Euclidean distances was created between all respondents and the centers of the seven segments. By then creating a confusion matrix (comparing current and closest segments) we were able to iteratively move respondents to their nearest segment whilst examining the impact on the segment profiles as we progressed. Fortunately, when all respondents were assigned to their nearest cluster the result was sharper and clearer (because the dampening effect of the central segment was removed from the other segments). This process was effectively the same as running k-means clustering with just one iteration. Figs 1 and 2 in provide the figures representing the multidimensional scaling conducted to assess interpretability of the segments identified. Having agreed on the resulting solution, we then applied the segmentation to the booster sample using Quadratic Discriminant Analysis (QDA) (training accuracy = 88%). Most incorrectly allocated respondents in the training data set were assigned to adjacent segments. An initial segmentation pass was conducted using empirically derived factors. We then set about refining factor reliability.
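The nearest-segment reassignment described above, effectively a single k-means iteration over fixed centroids, can be sketched as follows (an illustrative helper with invented names, not the study's implementation):

```python
import math

def assign_to_nearest(respondents, centroids):
    """Assign each respondent to the segment whose centroid is nearest
    in Euclidean distance over the factor scores.

    respondents: list of factor-score vectors (tuples/lists of floats)
    centroids:   dict segment_name -> centroid vector
    Returns a list of segment names, one per respondent. Comparing these
    assignments against current memberships in a confusion matrix is the
    sharpening step described in the text.
    """
    def d(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    return [min(centroids, key=lambda s: d(r, centroids[s])) for r in respondents]
```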
For the core factors necessary to rebuild the segmentation (ten in total), eight had performed sufficiently well (Cronbach's alpha > 0.7), but two were well below the required threshold. To address this, and to demonstrate that the resultant segmentation was based upon reliable factors, the segmentation was recreated (using distance to segment centroid as described above) but using only factors that were deemed reliable. The two factors that had inadequate reliability (as tested with Cronbach's alpha) were reworked using other items until we arrived at a measure that was reasonably equivalent conceptually and was reliable. We then used the necessary ten factors and compared our results with the final segmentation using a confusion matrix. Since it is nearly impossible to exactly replicate a segmentation from centroids alone (perturbing the data slightly can easily result in a similar but not exact match), an informal threshold was identified for a respondent to be moved to a new segment. This effectively prevented respondents on the edge of a segment from making a relatively small move to the adjacent segment and thus being recorded as being assigned incorrectly. A simple bootstrap (resampling with replacement) allowed us to estimate a reasonable threshold for change. Once applied, our rebuilt segmentation, using entirely reliable factors, matched the final segmentation for 98.4% of respondents. We therefore concluded that the original segmentation scheme was essentially identical to one built with all factors over a .7 Cronbach's alpha threshold. A list of the factor reliability scores and items comprising each factor is provided in for the final refined factors. These factors are described below. Factors were computed using factor loadings (where the direction of the items differed, this was reflected in the use of positive or negative factor loadings, which served in effect to reverse-code those items).
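The reliability criterion used throughout is Cronbach's alpha, which can be computed directly from item and total-score variances. A minimal sketch (our own helper; assumes the total score actually varies across respondents):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a scale.

    item_scores: list of columns, one list of respondent scores per item.
    Formula: alpha = k/(k-1) * (1 - sum(item variances) / var(total score)).
    """
    k = len(item_scores)
    n = len(item_scores[0])
    totals = [sum(col[i] for col in item_scores) for i in range(n)]
    item_var = sum(pvariance(col) for col in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

Scales whose items move together score near 1; incoherent scales fall below the 0.7 threshold the authors applied (alpha can even go negative when items disagree).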
All items comprising each factor in the EFA were included even where there was cross-loading, as the factors are used to generate the cluster analysis and are not used in associational analyses that would be problematic given such cross-loadings. The focus in understanding the clusters should be on the profiling variables not included in the factors, as described in the results below. The first factor, manageability of Covid-19 risks, was constructed using 22 items (Cronbach's alpha = .864). Sample items include: "life is too short to be worrying too much about Covid-19 risks" and "based on my experience, Covid-19 is not a threat". (All items comprising this and the other factors described below are listed in ). The second factor, effectiveness of protective behaviors, was constructed using 35 items (alpha = .962). Sample items asked about the effectiveness of wearing face masks, vaccines, and testing. Fourteen items comprised the third factor, concern about Covid-19 risks (alpha = .941), and included items about the extent to which Covid-19 posed a severe risk to the respondent, family members, and the UK population as a whole, as well as questions about worry overall and engaging in various activities. The fourth factor, personal well-being (alpha = .779), included a dozen items asking about physical, mental, and financial health. Self-care (alpha = .861) was the fifth factor; four items addressed efforts to maintain a healthy lifestyle, exercise, and diet. Sixth was sociability/sensation-seeking (alpha = .754), with six items addressing importance of socializing, enjoyment of risk-taking and novel experience, and impulsiveness. Seventh was Covid-19 self-reliance (alpha = .71), with five items concerning personal responsibility for health and decision-making concerning Covid-19-related behaviors. Anxiety about the world (alpha = .792) was eighth, with six items regarding worry about climate change, air pollution, antibiotic resistance, etc.
The ninth factor was comprehension of/trust in official guidance (alpha = .762), with three items about how easy or difficult it was to make sense of official guidance about Covid-19 and whether government and politicians had given honest and clear information. The tenth factor, personal health anxiety (alpha = .825), included five items concerning caution about going back to normal, concern about crowded spaces, and personal risk of Covid-19. The refined and reliable factors were used to create the segments described and profiled in the Results section below.
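The loading-weighted factor scoring described earlier (where a negative loading in effect reverse-codes an item) can be sketched as follows; the item names and loadings here are invented for illustration:

```python
def factor_score(answers, loadings):
    """Loading-weighted factor score for one respondent.

    answers:  dict item -> standardized response
    loadings: dict item -> signed factor loading; a negative loading
              effectively reverse-codes its item.
    """
    return sum(loadings[i] * answers[i] for i in loadings)
```

For example, a respondent who agrees with a positively loaded item and disagrees with a negatively loaded one contributes positively on both, so the two items reinforce rather than cancel.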
Self-care (alpha = .861) was the fifth factor; four items addressed efforts to maintain a healthy lifestyle, exercise, and diet. Sixth was sociability/sensation-seeking (alpha = .754), with six items addressing the importance of socializing, enjoyment of risk-taking and novel experience, and impulsiveness. Seventh was Covid-19 self-reliance (alpha = .71), with five items concerning personal responsibility for health and decision-making concerning Covid-19-related behaviors. Anxiety about the world (alpha = .792) was eighth, with six items regarding worry about climate change, air pollution, antibiotic resistance, etc. The ninth factor was comprehension/trust in official guidance (alpha = .762), with three items about how easy or difficult it was to make sense of official guidance about Covid-19 and whether government and politicians had given honest and clear information. The tenth factor, personal health anxiety (alpha = .825), included five items concerning caution about going back to normal, concern about crowded spaces, and personal risk of Covid-19. The refined and reliable factors were used to create the segments described and profiled in the Results section below. Characterizing the final segmentation solution Given the richness and complexity of these data, we summarize our observations about each of the segments below in narrative form, based on our review of the profiling differences. This should provide an understanding of the segment names and labels, and thus make perusal of the tables detailing differences between segments more intelligible. We refer to our first segment as the Trusting Compliers (14% of the population). Members of this segment tend to follow official guidance and are able to do so without major loss or inconvenience to their everyday lifestyles. 56% are male. One in three (33%) are in socio-economic grades A and B, which is 10% more than the population average. Their mean age is 58 (10 years older than the average for the population).
62% are in work and 54% have children in their household. Trusting Compliers are more interested than any other segment in acquiring information about the pandemic. Most find such information easy to understand. They trust medical professionals to give them good advice and have strong trust in mainstream media, such as TV, radio and newspapers. Nine out of ten members of this segment (91%) report complying with official advice relating to the pandemic. The second segment are the Concerned Cooperators (14% of the population). Members of this segment try to do what is expected of them but are not always sure what that official advice is or whether it can be trusted. The segment gender split (51% female) reflects the national average. Over half are in socio-economic grades C1 and C2, making them quite close to the population average. Their mean age is 54. 67% are in work and 55% have children in their household. Like the Trusting Compliers, the Concerned Cooperators are interested in acquiring information about the pandemic, but approximately two thirds of them do not find the guidance they are offered easy to understand. They tend to trust messages from the mainstream media, such as TV, radio and newspapers, and they trust medical professionals to give them good advice. 86% of the members of this segment comply with official advice relating to the pandemic. The third segment are the Fearful and Overwhelmed (13% of the population). These people tend to feel scared and lost, often confused by the guidance they are offered and seeing health insecurity as one of many pressing challenges with which they must cope. At 64%, this is the most predominantly female segment. Over 1 in 3 (36%) of the people in this segment are in socio-economic grades D and E ‐ 11% more than the population average. Their mean age is 48. 51% are in work, but only 27% have children in their household, which is 16% below the national average.
Just over half (58%) of the Fearful and Overwhelmed are interested in information about the pandemic, but over a third of them find such information difficult to understand. They have lower than average trust in guidance offered to them by medical professionals or the mainstream media. A significant proportion of this segment turn to alternative online sites for information about the pandemic. They also have a high level of trust in faith groups compared to other segments. Three in four (73%) people within this segment comply with official guidance relating to the pandemic. We refer to the fourth segment as the Informed and Responsible (13% of the population). Members of this segment are inclined to weigh up any official advice that is given to them relating to the pandemic in accordance with their own experiences, sometimes challenging what they are being told. 60% are male. This group has a broad socio-economic distribution, with over half falling within socio-economic grades C1 and C2. Their mean age is 56. They have the highest proportion of segment members in work (72%) and with children in their household (63%). Only a minority (37%) of the Informed and Responsible are interested in information about the pandemic, but most (55%) find the official guidance they do receive easy to understand. Members of this segment tend to trust medical professionals and, to a lesser extent, mainstream media, but they question what they are told and want to be able to verify facts for themselves. Most people in this segment (72%) comply with official advice relating to the pandemic. The fifth segment are the Nonchalant (15% of the population). Members of this segment tend to have no strong views about the official guidance they are offered. As pragmatists, they are prepared to make an effort but do not always see the point of making big life changes. Just over half (55%) are female. The distribution across socio-economic grades within this group reflects the population average.
Their mean age is 43. Two thirds (67%) of them are in work (10% above the population average) and 44% have children living in their household. Four in 10 (39%) members of this segment are interested in receiving information about the pandemic, but fewer than 1 in 3 (28%) find such information easy to understand. In accessing information relating to the pandemic, they tend to move between mainstream media and alternative online sites, and they trust advice from medical professionals. One in 5 of the Nonchalant group trust information from faith groups. Half of the people in this segment comply with official advice; half do not. The sixth segment are the Unconcerned and Uncooperative (at 21% of the population, the largest segment). Members of this segment lead busy lives and do not want to be disturbed or held back by crisis conditions. Just over half (51%) are female. Over half are in socio-economic grades C1 and C2, making them quite close to the population average. This is the youngest of our segments, with a mean age of 36. Only half (51%) are in work and over two-thirds live in households without children. Three-quarters of the members of this segment have no interest in acquiring information about the pandemic and over a third (35%) say that they feel overwhelmed by official advice and are not sure whether it is correct. They have low trust in any source of external information and are more likely to trust friendship networks than mainstream media. Only 1 in 4 members of this segment comply with official advice. The seventh segment are the Skeptical Resisters (15% of the population). Members of this segment do not want to be told what to do. They outrightly resist official guidance and are happy to be considered social rebels. 53% of this group are female. Over half (55%) are in socio-economic grades C1 and C2, with almost a third (31%) in grades D or E. Their mean age is 48. 54% are in work and 41% have children in their household.
Just over half of this segment are interested in information about the pandemic, but over 1 in 3 (36%) find it difficult to comprehend. Members of this segment have the second lowest level of trust in medical professionals. They are very skeptical about the honesty or clarity of messages from politicians or experts. One in three people within this segment trust information about the pandemic that they find on alternative online health sites, and a third trust faith groups. Only 1 in 4 people in this segment comply with official advice relating to the pandemic, and they have far and away the lowest rate of vaccination of any segment. Comparing the segments on key variables In the tables that follow, for each variable profiled, scores were compared between segments using a Z test for percentages and a t test for means, at a .05 significance level for each test, as is standard practice in market research; the purpose is primarily to highlight differences that are worthy of attention. The focus in audience segmentation is on interpreting the overall pattern of results, not (over)interpreting comparisons of a few specific scores in isolation, given the number of comparisons made. Each column (corresponding to one of the segments) is assigned a letter so it is clear which segments appear to be different from each other, beginning with the largest mean or percentage, with comparison to the smaller means and percentages for that variable. Therefore, the smallest mean or percentage (or the smallest several, if they are not significantly different) will have no letter, as there are no smaller cells with which to compare, and the larger cells should be examined to see which are different from the smaller cells. profiles the segments demographically. As one might expect given the relationship of age to Covid-19 risk, segments vary quite a bit by average age.
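The pairwise Z test for percentages used in the tables can be sketched with a standard pooled two-proportion test. The proportions and segment sizes below are illustrative, not taken from the study's tables:

```python
import math

def two_prop_z(p1, n1, p2, n2):
    """Two-sided z test comparing two independent proportions,
    using the pooled estimate for the standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail
    return z, p_value

# Illustrative: compare compliance rates of two segments
z, p = two_prop_z(0.91, 1400, 0.73, 1300)
print(round(z, 2), p < .05)
```

Run once per segment pair per variable, this reproduces the kind of lettered significance annotations described, which is also why the authors caution against over-interpreting any single comparison.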
At the same time, people of the same age can be in segments that are dramatically different with respect to pandemic-related attitudes and behaviors, as is evident below. Definitions of socioeconomic segments are described in Methods. demonstrates the sometimes quite dramatic differences between segments with respect to vaccine compliance and cooperation regarding masking and social distancing in the two weeks prior to responding to the survey. In , we show segment differences with respect to some key attitudes characterizing each segment, which provide richer insight into what perspectives must be addressed when communicating with members of the segment. Differences by segment in the media channels they rely on for pandemic information are summarized in , and differences in trust of information sources of Covid-19 guidance are summarized in . Understanding these differences is crucial in determining how best to reach each segment with public health information, using the information channels they are most likely to actually use for such information and most likely to trust. Patterns of information channel trust also provide insights into the degree of alienation from, or engagement with, mainstream social influences such as physicians and national media sources. Validation: Comparing segments to demographic variables as predictors of protective behavior and vaccination refusal One way to demonstrate the power of this segmentation analysis is through predictive validity. Public health researchers tend to look carefully at demographic influences and demographic differences, for obvious reasons, when characterizing population health data. At least some demographic variables should be strongly predictive of key Covid-19 outcome measures, such as adoption of protective behaviours and vaccine refusal.
After all, Covid-19 risks are closely associated with age ; education may well be associated with understanding the public health value of vaccination and protective behaviours ; and income may influence other risk factors such as the necessity of working in high-exposure settings . We can compare the segmentation scheme, which is simply one seven-level categorical variable, with these demographic variables, along with race/ethnicity and gender, with respect to prediction of protective behaviours and vaccine refusal (we note that this is a post-hoc analysis conducted in response to a reviewer query). We created a summative measure of self-reported protective behaviours engaged in over the two weeks prior to responding to the survey. These included masking, social distancing, avoiding crowded indoor spaces, handwashing for at least 20 seconds, and opening a window for ventilation when indoors with others. Vaccine refusal was based on responding affirmatively to the statement “I haven’t had a vaccination and don’t intend to” when offered a series of statements characterizing respondents’ Covid-19 vaccination status. (At the time of the survey, booster roll-out was not complete, so that outcome would have been confounded with availability of the booster to various parts of the population at that time.) These outcome measures, of course, were not included in the items used to create the segmentation scheme. We used GLM regressions to analyze prediction of the protective behaviours (Unianova) and generalized linear model logistic regression to examine prediction of vaccine refusal (SPSS version 29). For our summative measure of self-reported protective behaviours, the segmentation variable had a partial eta-squared of .145, which is generally considered a strong effect size . Age had a partial eta-squared of .015, considered a weak effect size. All the other demographic variables each had a nominal partial eta-squared under .01.
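Partial eta-squared, the effect size reported here, is the ratio of the effect's sum of squares to the effect plus error sums of squares. In a single-factor design it coincides with classical eta-squared; a minimal one-factor sketch with toy group data follows:

```python
def partial_eta_squared(groups):
    """Partial eta-squared for a one-factor design:
    SS_between / (SS_between + SS_within).
    groups: list of lists of outcome scores, one list per group."""
    all_vals = [x for g in groups for x in g]
    grand = sum(all_vals) / len(all_vals)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return ss_between / (ss_between + ss_within)

# Toy example: two groups of protective-behaviour scores
print(round(partial_eta_squared([[1, 2, 3], [4, 5, 6]]), 3))
```

In a multi-predictor GLM like the one run in SPSS, the error term excludes variance explained by the other predictors, so each predictor's partial eta-squared is computed against its own effect-plus-error partition rather than total variance.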
In total, the demographic variables had a partial eta-squared of .035, less than one-quarter the explanatory power of the single segmentation variable (see ). Logistic regression, needed to test prediction of vaccine refusal, does not lend itself to effect size estimation, but comparison of Wald chi-square statistics can be informative. Gender, education, and race/ethnicity were not significantly predictive of vaccine refusal. The Wald chi-square (with one df ) was 19.5 for age and 34.5 for income. For the segmentation variable (with 6 df ) the Wald chi-square was 459.2. As can be seen from , this was clearly due to strongly disproportionate refusal among the Skeptical Resisters and to a lesser extent the Unconcerned and Uncooperative segments.
Segmentation analyses such as this are typically intended to guide communication efforts. A key value of this segmentation is that public health communicators can gear message design and channel strategies regarding the pandemic to specific groups within the population. New insights from this research regarding Covid audience segmentation As described earlier, prior research has either focused on specialized subpopulations , used limited items to create the segmentation that did not permit psychological profiling , or, in the most complete available segmentation studies, used a single theoretical framework along with trust items , or used a richer set of psychosocial items but lacked media and source trust data . These studies all focused on vaccination uptake, with little or no information available on other compliance behaviors. As summarized above, the present study combines the rich psychosocial data of the Zhou et al. study , even more extensive information on media and source use and trust than found in Thaker et al. , and richer data on compliance beyond vaccination alone. In so doing, we believe we generated segment insights unavailable from previous research. For example, we identified four very distinct groups of cooperators: those who were well-resourced, information-seeking, and trusting of reliable sources; those who were less well-resourced and struggled more to understand guidance; those who were fearful and confused but willing to comply given their fears; and a fourth group who were also well-resourced but more skeptical and less trusting, though generally compliant.
Among those who were less compliant, one group (about half of whom were generally compliant with recommendations) seemed generally unengaged with the issue; a second, quite large segment consisted predominantly of young people at lower risk, more likely than other segments to trust friends and social media relative to traditional sources, with about 25% generally compliant; and a final segment was distinguished by lack of trust in health authorities and government and a dislike of authority ‐ almost one-quarter of them have not been and do not intend to be vaccinated, compared to 8% for the youthful uncooperative segment and under 5% for each of the other segments. Interestingly, other forms of compliance are quite similar for the last two segments ‐ the final segment is uniquely resistant to vaccination in particular. Implications for public health managers and communicators regarding reaching these segments Our summary of the segments, above, lends itself to some preliminary suggestions regarding public health communication strategy. For example, as the Trusting Compliers are already responsive to existing messages and communication strategies, the most effective way to reach them is probably to persist in providing clear, consistent messaging through mainstream media channels. While clearly a cooperative group, the Concerned Cooperators differ from the Trusting Compliers in finding it challenging to understand official guidance. For them, there is a need to simplify and pre-test messaging and to maximize its consistency. Like the first two segments, the Fearful and Overwhelmed tend to comply with official guidance, but they differ significantly in their levels of fear and anxiety. Interpersonal and community channels, such as worksites, unions and churches, could have potential for reaching and reassuring this segment by providing interpersonal support while conveying the message.
For the Informed and Responsible, an emphasis upon taking protective action as an act of civic solidarity may prove a good fit, as would links to in-depth online information that is open to scrutiny. The Nonchalant are an interesting segment, given their even split on adherence to official guidance. Clarity and consistency of messaging is particularly important for them, with an emphasis upon relevance to their everyday life experiences. For example, members of this segment are more likely than others to have older parents, so emphasizing the impact of behavioral choices upon others might be helpful. Being the most resistant to official guidance, the final two segments (the Unconcerned and Uncooperative and the Skeptical Resisters) are most in need of messages that are clear, consistent, and simple. For the Unconcerned and Uncooperative, supplementing appeals via the usual channels with information dissemination within the organizations in which they live and work would fit with the trust that they place in interpersonal relationships. The Skeptical Resisters might seem inherently averse to any kind of official guidance, but at least some of this may be due to its unintelligibility, which could be addressed through a greater focus upon clarity, simplicity and consistency. Lack of trust in virtually any communication channel is the biggest obstacle to reaching the Skeptical Resisters. For the Unconcerned and Uncooperative and the Fearful and Overwhelmed, direct communication through interpersonal networks and the organizations in which they live and work is likely to be most effective. Likewise, if possible, finding sources, such as prominent individuals, whom the Unconcerned and Uncooperative find relatively trustworthy, and others for the Skeptical Resisters, who are willing publicly to make a case for at least some public health recommendations, may prove helpful. Of course, the above recommendations can only be broadly strategic.
We possess a wealth of data from our study that could enrich communication planning and guide the development of specific messages for various segments (available at the following link: to be inserted). In addition, there is a need for in-depth qualitative investigation of how members of these segments experience and think about the world. Particularly for public health communication purposes, it is readily possible to draft messages and channel strategies for each segment, and then pretest the messages and get feedback on the channel strategy from representatives of each segment. This is normally done by conducting discriminant analyses to identify a much smaller number of questions (typically between 20 and 30 items) that can correctly identify segment members with reasonable accuracy. This short instrument can be used to identify and recruit segment members for focus groups, interviews, and other qualitative and pretesting research. The development of this reduced set of items to identify segment members for qualitative and pretest research is currently under way. In addition to the specificity of the pandemic as a critical moment of health insecurity, we think that this segmentation can be read as a study of how a particular population (the English) form attitudes and make choices in response to a crisis that calls for their civic engagement. Unlike market segmentation analyses, which seek to split populations into sub-groups that can be ranked in terms of their value to sellers of goods and services, our aim was to find ways of appealing to peoples’ commonality as citizens constituting a mutually interdependent public. The Covid-19 pandemic was a classic case of all citizens being threatened by a common risk, albeit unevenly distributed in its effects, and it was only by finding ways of persuading them to act as a public, albeit differentiated by experiential and attitudinal differences, that security could be realized. 
In this sense, the pandemic could be seen not only as an immediate challenge for health communicators, but as an historical template for the challenges facing collective civic action in an age of public fragmentation. Methodologically, there are also important lessons here for public health officials and health researchers. A survey with a culturally diverse national or regional population may well run into the problems of courtesy or acquiescence bias for some subpopulations, as we did, which cause difficulties for person-centered analytic procedures. Researchers should be vigilant about reviewing their data for such patterns. One option, of course, is to drop such respondents from the analysis—but if so, many members of the social group involved will no longer be represented, and often, as in our case, this may be a minority population of important public health interest. An alternative approach used here is, for the sets of items reflecting such bias, to standardize those variables based on the respondent’s own mean, as described in the Methods section here. Limitations Finally, we acknowledge the limitations of this study. Segmentation is always a function of the items in a data set. The data we gathered for this segmentation reflected a wide variety of items originating from the government agency that initiated it, the academics who helped to design it, and the market research company that implemented it. This convergence of perspectives was a strength insofar as it attached the study to real-world priorities, but we recognize that a cost of this eclectic approach was that we were not primarily focused upon the use of validated academic measures and constructs in the instrument design. This has made it harder to relate our findings to the theoretical literature but may have been an advantage in that we were not bound by previous thinking in working towards creative solutions in this unique public health challenge. 
This study is also a snapshot of population attitudes and behaviors at a given time point in history. While we suspect the segments would likely reproduce at many points during the pandemic, specific levels of responses to attitude and behavior questions would have been likely to vary. We also acknowledge, as discussed above, that our analytic approach was made more challenging by the existence of a segment with acquiescence bias. The initial k means segmentation was relatively standard, carefully constructed on the basis of reliable factors. The subsequent one involved a state-of-the-art commercial segmentation effort which made greater use of interpretation and judgment in refining the segmentation scheme, as well as using respondent standardization where needed, to address the acquiescence bias. This resulted in a more refined, but less orthodox segmentation model, and provides a methodological approach that may prove valuable for other public health contexts in which culturally diverse populations prove to have distinctive response sets that must be addressed in a cluster analysis. Finally, there are a great many variables profiled in these analyses and a great many comparisons made between segments. While one can have reasonable confidence in overall patterns of results, given the risk of some spurious results over so many comparisons, we recommend caution in interpreting any one comparison in isolation. We conclude by noting that there is no one ‘right’ segmentation model that encapsulates the complete story. However, our combination of extensive psychosocial, behavioral, source and media trust, and compliance data we believe provides a range of insights unavailable from previous Covid segmentation studies and that may provide guidance for segmentation studies for future public health emergencies. 
As described earlier, prior research has either focused on specialized subpopulations , used limited items to create the segmentation that did not permit psychological profiling , or, in the most complete available segmentation studies, used a single theoretical framework along with trust items , or used a richer set of psychosocial items but lacked media and source trust data . These studies all focused on vaccination uptake with little or no information available on other compliance behaviors. As summarized above, the present study combines the rich psychosocial data of the Zhou et al. study , and even more extensive information on media and source use and trust as found in Thaker et al. , and richer data on compliance beyond vaccination alone. In so doing, we believe we generated segment insights unavailable from previous research. For example, we identified four very distinct groups of cooperators ‐ those who were well-resourced, seeking information, trusting reliable sources; those who were less well-resourced and struggling more with understanding guidance; and those who were fearful, confused, but willing to comply given their fears, and a fourth group that were also well-resourced, but were more skeptical, less trusting, though generally compliant. Among those who were less compliant, one group (with about half generally compliant with recommendations), seeming generally unengaged with the issue; a second and quite large segment was predominantly young people less at risk, more likely to trust friends and social media relative to traditional sources than other segments, with about 25% generally compliant; and a final segment distinguished by lack of trust in health authorities, government, dislike of authority ‐ and almost one-quarter of them have not been and do not intend to be vaccinated, compared to 8% for the youthful uncooperative segment and under 5% for each of the other segments. 
Interestingly, other forms of compliance are quite similar for the last two segments ‐ the last segment is uniquely resistant to vaccination in particular. Our summary of the segments, above, lends itself to some preliminary suggestions regarding public health communication strategy. For example, as the Trusting Compliers are already responsive to existing messages and communication strategies, the most effective way to target them probably is to persist in providing clear, consistent messaging through mainstream media channels. While clearly a cooperative group, the Concerned Cooperators differ from the Trusting Compliers in finding it challenging to understand official guidance. For them there is a need to simplify and pre-test messaging and maximize its consistency. Like the first two segments, the Fearful/Overwhelmed tend to comply with official guidance, but they differ from them significantly in their levels of fear and anxiety. Interpersonal and community channels, such as worksites, unions and churches could have potential for reaching and reassuring this segment by providing interpersonal support while conveying the message. For the Informed and Responsible, an emphasis upon taking protective action as an act of civic solidarity may prove to be a good fit, as would links to in-depth online information that is open to scrutiny. The Nonchalant is an interesting segment, given its split on adherence to official guidance. Clarity and consistency of messaging is particularly important for them, with an emphasis upon relevance to their everyday life experiences. For example, members of this segment are more likely than others to have older parents, so emphasis upon the impact of behavioral choices upon others might be helpful. Being the most resistant to official guidance, the final two segments (the Unconcerned and Unsupportive and the Skeptical Resisters) are most in need of messages that are clear, consistent, and simple. 
For the Unconcerned and Unsupportive, supplementing appeals via the usual channels with information dissemination within organizations in which they live and work would fit with the trust that they place in interpersonal relationships. The Skepticals might seem to be inherently averse to any kind of official guidance, but some of this at least might be due to its unintelligibility, and that could be addressed through a greater focus upon clarity, simplicity and consistency. Lack of trust in virtually any communication channel is the biggest obstacle to reaching the Skeptical Resister. With the Unconcerned and Unsupportive, and Fearful and Overwhelmed, direct communication in interpersonal networks and organizations in which they live and work are likely to be most effective. Likewise, if possible, finding sources such as prominent individuals who the Unconcerned and Unsupportive find relatively trustworthy, and others for the Skeptical Resisters, who are willing publicly to make a case for at least some public health recommendations may prove helpful. Of course, the above recommendations can only be broadly strategic. We possess a wealth of data from our study that could enrich communication planning and guide the development of specific messages for various segments (available at the following link: to be inserted). In addition, there is a need for in-depth qualitative investigation of how members of these segments experience and think about the world. Particularly for public health communication purposes, it is readily possible to draft messages and channel strategies for each segment, and then pretest the messages and get feedback on the channel strategy from representatives of each segment. This is normally done by conducting discriminant analyses to identify a much smaller number of questions (typically between 20 and 30 items) that can correctly identify segment members with reasonable accuracy. 
This short instrument can be used to identify and recruit segment members for focus groups, interviews, and other qualitative and pretesting research. The development of this reduced set of items to identify segment members for qualitative and pretest research is currently under way. In addition to the specificity of the pandemic as a critical moment of health insecurity, we think that this segmentation can be read as a study of how a particular population (the English) form attitudes and make choices in response to a crisis that calls for their civic engagement. Unlike market segmentation analyses, which seek to split populations into sub-groups that can be ranked in terms of their value to sellers of goods and services, our aim was to find ways of appealing to peoples’ commonality as citizens constituting a mutually interdependent public. The Covid-19 pandemic was a classic case of all citizens being threatened by a common risk, albeit unevenly distributed in its effects, and it was only by finding ways of persuading them to act as a public, albeit differentiated by experiential and attitudinal differences, that security could be realized. In this sense, the pandemic could be seen not only as an immediate challenge for health communicators, but as an historical template for the challenges facing collective civic action in an age of public fragmentation. Methodologically, there are also important lessons here for public health officials and health researchers. A survey with a culturally diverse national or regional population may well run into the problems of courtesy or acquiescence bias for some subpopulations, as we did, which cause difficulties for person-centered analytic procedures. Researchers should be vigilant about reviewing their data for such patterns. 
One option, of course, is to drop such respondents from the analysis—but if so, many members of the social group involved will no longer be represented, and often, as in our case, this may be a minority population of important public health interest. An alternative approach used here is, for the sets of items reflecting such bias, to standardize those variables based on the respondent’s own mean, as described in the Methods section here. Finally, we acknowledge the limitations of this study. Segmentation is always a function of the items in a data set. The data we gathered for this segmentation reflected a wide variety of items originating from the government agency that initiated it, the academics who helped to design it, and the market research company that implemented it. This convergence of perspectives was a strength insofar as it attached the study to real-world priorities, but we recognize that a cost of this eclectic approach was that we were not primarily focused upon the use of validated academic measures and constructs in the instrument design. This has made it harder to relate our findings to the theoretical literature but may have been an advantage in that we were not bound by previous thinking in working towards creative solutions in this unique public health challenge. This study is also a snapshot of population attitudes and behaviors at a given time point in history. While we suspect the segments would likely reproduce at many points during the pandemic, specific levels of responses to attitude and behavior questions would have been likely to vary. We also acknowledge, as discussed above, that our analytic approach was made more challenging by the existence of a segment with acquiescence bias. The initial k means segmentation was relatively standard, carefully constructed on the basis of reliable factors. 
The subsequent one involved a state-of-the-art commercial segmentation effort which made greater use of interpretation and judgment in refining the segmentation scheme, as well as using respondent standardization where needed, to address the acquiescence bias. This resulted in a more refined, but less orthodox segmentation model, and provides a methodological approach that may prove valuable for other public health contexts in which culturally diverse populations prove to have distinctive response sets that must be addressed in a cluster analysis. Finally, there are a great many variables profiled in these analyses and a great many comparisons made between segments. While one can have reasonable confidence in overall patterns of results, given the risk of some spurious results over so many comparisons, we recommend caution in interpreting any one comparison in isolation. We conclude by noting that there is no one ‘right’ segmentation model that encapsulates the complete story. However, our combination of extensive psychosocial, behavioral, source and media trust, and compliance data we believe provides a range of insights unavailable from previous Covid segmentation studies and that may provide guidance for segmentation studies for future public health emergencies. S1 Appendix Multidimensional scaling for preliminary assessment of segment interpretability. (ZIP) Click here for additional data file. S2 Appendix Factors used to create segmentation and items comprising them. (ODS) Click here for additional data file. |
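The respondent-mean standardization described above as a remedy for acquiescence bias (sometimes called ipsatization) can be sketched in a few lines. This is an illustrative implementation, not the study's actual code; the function name and scores are made up.

```python
def respondent_standardize(scores):
    """Center a respondent's item scores on their own mean, removing a
    uniform tendency to agree with everything (acquiescence bias)."""
    mean = sum(scores) / len(scores)
    return [s - mean for s in scores]

# A "yea-sayer" who rates everything high and a moderate respondent
# produce identical profiles once each is centered on their own mean,
# so clustering reflects relative item differences rather than bias.
yea_sayer = respondent_standardize([5, 5, 4, 5])  # [0.25, 0.25, -0.75, 0.25]
moderate = respondent_standardize([3, 3, 2, 3])   # [0.25, 0.25, -0.75, 0.25]
```

The centered profiles, rather than the raw scores, would then be passed to the cluster analysis for the affected subpopulation.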
Comorbidity and multimorbidity in patients with cirrhosis, hospitalised in an internal medicine ward: a monocentric, cross-sectional study | 07c9dbcf-6131-4e21-b3d7-76534541283f | 11086508 | Internal Medicine[mh] | Clinical complexity is one of the most challenging issues of modern medicine, especially in internal medicine, and it originates from the interaction between the patient’s own factors and other external, but contextual, factors. Its fundamental attributes are interconnectedness, non-linearity, context-sensitivity and unpredictability. Among the most important determinants of clinical complexity, the association of multiple chronic conditions within the same patient is certainly one of the most relevant, and for some years multiple chronic conditions and clinical complexity have been identified with each other. However, subsequent studies have demonstrated that clinical complexity is something more than, and different from, mere disease associations, and it includes both biological (ie, ageing, multiple chronic conditions, frailty, mental impairment, malnutrition, dependency) and non-biological (ie, socioeconomic, cultural, environmental, behavioural) variables. Further, multiple chronic conditions can be split into two important clinical categories, namely comorbidity, which indicates the combined effects of additional conditions in reference to an index disease under study, and multimorbidity, which indicates the mere co-occurrence of multiple diseases within the same individual, in which no single disease holds priority. The distinction between comorbidity and multimorbidity may translate into substantial differences in the pathways of care. Among the various forms of end-stage organ failure, liver cirrhosis is an example of both clinical complexity and a systemic condition.
To mention a few disease-related manifestations: ascites, hepatic encephalopathy, blood cell count alterations, coagulopathy and gastrointestinal bleeding all have a negative impact on both physical and mental functioning. Additionally, patients with cirrhosis frequently have multiple chronic conditions, although their impact on prognosis remains unclear, and a distinction between comorbidity and multimorbidity has never been assessed. Besides its biological complexity, socioeconomic factors, that is, education, marital and employment status and household income, are additional detrimental factors, the effects of which appear to vary according to disease aetiology and to have a relevant impact on survival and on patients’ overall management. In particular, different networks and trajectories of disease association might be noticed according to the specific aetiology of cirrhosis, such as chronic viral hepatitis (hepatitis B virus (HBV) or hepatitis C virus (HCV) related), alcoholic liver disease, autoimmune liver disease and non-alcoholic fatty liver disease (NAFLD). On these bases, we sought to analyse a population of patients with cirrhosis admitted to an internal medicine ward, in order to highlight whether any difference exists in the rate of comorbidity, multimorbidity and other determinants of clinical complexity in relation to patients’ characteristics and to the specific aetiology of liver cirrhosis. Study population For the purpose of this paper, data from the San MAtteo Complexity (SMAC) study were used. The SMAC study is a large ongoing prospective research project regarding clinical complexity (NCT03439410) conducted at our Institution (IRCCS San Matteo Hospital Foundation, University of Pavia, Pavia, Italy). The primary aim of the SMAC study is the validation of a tool for assessing clinical complexity in hospitalised patients.
Several sociodemographic and clinical characteristics were collected, including age, sex, socioeconomic status, cause of admission, polypharmacy and major health outcomes (ie, in-hospital death, hospital readmissions, death at follow-up). Specifically, adult patients (age >18 years) admitted to our internal medicine ward, regardless of the cause, were consecutively enrolled from November 2017 to November 2019 by trained physicians and by a research nurse. All patients’ data were collected by the trained researchers, to avoid potential biases. Terminally ill patients with an expected prognosis of less than 48 hours and denial of informed consent were the only exclusion criteria. The telephone follow-up, scheduled every 4 months for the first year after discharge and yearly thereafter for up to 5 years, is still ongoing. Selection of patients with cirrhosis In this study, which is a subanalysis of the SMAC study, among all enrolled patients (n=1433), we selected those with a clinical diagnosis of liver cirrhosis according to the International Classification of Diseases-9 codes (ie, 571, 571.2, 571.5, 571.6, 571.4, 571.40, 571.41, 571.49, 571.8 and 571.9). Hence, this is a cross-sectional study, in which we used data from a single time point (ie, the time of discharge of the patient). Also, the discharge letter of each patient with cirrhosis was reviewed for confirming the aetiology of the disease, according to internationally recognised guidelines and recommendations. Among all causes of cirrhosis, we categorised patients as having alcohol, viral (either by HBV and/or HCV infection) or NAFLD cirrhosis. Patients with undetermined causes of cirrhosis or with rare causes of cirrhosis (eg, autoimmune liver disease, sclerosing cholangitis and others) were excluded. In the case of multiple aetiologies, we selected either the leading or the more lasting cause of liver injury. 
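The code-based selection step described above can be sketched as a simple filter. The record layout below is hypothetical; the ICD-9 code set is the one listed in the text.

```python
# ICD-9 codes used above to identify a clinical diagnosis of liver cirrhosis.
CIRRHOSIS_CODES = {"571", "571.2", "571.5", "571.6", "571.4",
                   "571.40", "571.41", "571.49", "571.8", "571.9"}

def has_cirrhosis(record):
    """True if any discharge code in the record is a cirrhosis code."""
    return any(code in CIRRHOSIS_CODES for code in record["icd9_codes"])

# Hypothetical records: only patient 1 carries a qualifying code.
records = [
    {"patient_id": 1, "icd9_codes": ["571.5", "401.9"]},
    {"patient_id": 2, "icd9_codes": ["250.00", "414.01"]},
]
selected = [r["patient_id"] for r in records if has_cirrhosis(r)]  # [1]
```

In the study itself, the selection was additionally confirmed by reviewing each discharge letter, so a code match alone would not have been sufficient.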
Liver cirrhosis was diagnosed on the basis of clinical features, laboratory characteristics, imaging (abdominal ultrasound, liver fibroscan) and liver biopsy (when available). Alcohol cirrhosis was diagnosed when a history of persistent alcohol consumption/abuse was ascertained while the diagnosis of viral hepatitis relied on serology. NAFLD cirrhosis was diagnosed when all other causes of cirrhosis were ruled out, and other clear metabolic alterations were present (ie, obesity/overweight, dyslipidaemia, oral glucose intolerance or diabetes mellitus type II); in some cases, the diagnosis was also confirmed by biopsy. Definition of comorbidity and multimorbidity Considering its clinical features and the progressive disease course, liver cirrhosis could ideally represent a model of comorbidity or multimorbidity, both encompassing the concept of multiple chronic conditions. In this regard, recently standardised definitions for comorbidity and multimorbidity have been introduced to distinguish patients in the context of multiple chronic conditions. As already stated, comorbidity indicates the combined effects of additional conditions in reference to an index disease under study, whereas multimorbidity indicates the mere co-occurrence of multiple diseases within the same individual, in which no single disease holds priority. Accordingly, specific novel Medical Subject Heading (MeSH) definitions have been released for indexing purposes. Following these definitions, all our patients have been categorised as having either comorbidity or multimorbidity by an expert physician who reviewed all patients’ discharge letters. 
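A simplified operationalization of these definitions might look like the following sketch. The actual assignment was made by an expert physician reviewing discharge letters; the complication list mirrors the examples given in the text, and the function is illustrative only.

```python
# Conditions treated as direct complications of the index disease.
CIRRHOSIS_COMPLICATIONS = {
    "cirrhosis decompensation", "gastrointestinal bleeding",
    "hepatic encephalopathy", "ascites",
}

def categorise(conditions):
    """Comorbidity if every listed condition depends on cirrhosis (the
    index disease); multimorbidity if independent chronic diseases
    co-occur alongside it."""
    independent = set(conditions) - CIRRHOSIS_COMPLICATIONS - {"liver cirrhosis"}
    return "multimorbidity" if independent else "comorbidity"

categorise({"liver cirrhosis", "ascites"})                   # "comorbidity"
categorise({"liver cirrhosis", "ischaemic heart disease",
            "diabetes mellitus type II"})                    # "multimorbidity"
```

A rule like this cannot capture the clinical judgment involved in deciding whether a given condition truly depends on the index disease, which is why the study relied on expert review.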
For example, patients having only complications of liver cirrhosis (namely cirrhosis decompensation, gastrointestinal bleeding, hepatic encephalopathy, ascites) have been categorised as being comorbid (ie, all these conditions are dependent on liver cirrhosis, which is therefore the index disease), while patients with other associated, clinically relevant conditions (eg, a patient with liver cirrhosis, ischaemic heart disease, diabetes mellitus type II and chronic kidney failure) have been categorised as having multimorbidity. Outcomes and variables As a primary aim, we looked at the rates of comorbidity or multimorbidity and other possible determinants of clinical complexity in patients with cirrhosis, compared with the whole SMAC cohort. As a secondary aim, we compared the rates of comorbidity and multimorbidity according to the aetiology of liver cirrhosis, as well as other potential determinants of clinical complexity, including sex, body mass index (BMI), schooling (categorised as <8 or ≥8 years, 8 being the legally required number of years of compulsory education), income (categorised as <€1000/month or ≥€1000/month), the Cumulative Illness Rating Scale (CIRS) comorbidity and severity indexes, the Edmonton Frail Scale (a score >5 indicates frailty), the Barthel index (a score <60 indicates dependency), the Short Blessed Test (a score >9 indicates cognitive impairment) and length of stay (LOS). The causes of admission to hospital were categorised as either related or unrelated to liver cirrhosis and were included in the multivariable analysis. Finally, we sought to determine the factors affecting the risk of having multimorbidity according to the aetiology. Statistical analysis Continuous data were described with the median and IQR and compared with the Mann-Whitney U test or the Kruskal-Wallis test. Categorical data were reported as counts and percentages and compared with Fisher’s exact test.
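Fisher's exact test for a 2×2 table, mentioned above, reduces to summing hypergeometric probabilities over all tables with the same margins. A dependency-free sketch of the two-sided version follows; the counts are illustrative, not the study's data.

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the probabilities of all margin-preserving tables that are no more
    likely than the observed one."""
    n, row1, col1 = a + b + c + d, a + b, a + c

    def prob(x):  # hypergeometric probability that cell (1,1) equals x
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = prob(a)
    lo, hi = max(0, row1 - (n - col1)), min(row1, col1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

p = fisher_exact_two_sided(8, 2, 1, 5)  # ≈ 0.0350
```

For this table the result matches the value reported by standard statistical packages; in practice one would of course use a library routine rather than reimplementing the test.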
Based on clinical considerations, we chose a priori a series of candidate variables, which were considered the most relevant patient clinical characteristics according to the aetiology of cirrhosis. These were checked for collinearity and were included in a logistic multivariable model. For descriptive purposes, the univariable analysis of the candidate variables was also performed. The area under the model receiver operating characteristic (ROC) curve was computed as a measure of model performance. The model calibration was assessed graphically using the calibration plot and the corresponding statistic test was computed. We did not formally calculate the sample size for this substudy, as all patients from the SMAC registry were included. However, given the overall sample of 172 patients with cirrhosis with 36 patients with comorbidity, we would be able to fit a multivariable model with up to about four predictors without overfitting, according to the 1:10 predictors to event rule. A posteriori the good calibration of our model with 6 df was assessed, as described above. The software Stata V.17 (StataCorp) was used for all computations. The study follows the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) recommendations for reporting. Patient and public involvement None. For the purpose of this paper, data from the San MAtteo Complexity (SMAC) study were used. The SMAC study is a large ongoing prospective research project regarding clinical complexity (NCT03439410) conducted at our Institution (IRCCS San Matteo Hospital Foundation, University of Pavia, Pavia, Italy). The primary aim of the SMAC study is the validation of a tool for assessing clinical complexity in hospitalised patients. Several sociodemographic and clinical characteristics were collected, including age, sex, socioeconomic status, cause of admission, polypharmacy and major health outcomes (ie, in-hospital death, hospital readmissions, death at follow-up). 
Specifically, adult patients (age >18 years) admitted to our internal medicine ward, regardless of the cause, were consecutively enrolled from November 2017 to November 2019 by trained physicians and by a research nurse. All patients’ data were collected by the trained researchers, to avoid potential biases. Terminally ill patients with an expected prognosis of less than 48 hours and denial of informed consent were the only exclusion criteria. The telephone follow-up, scheduled every 4 months for the first year after discharge and yearly thereafter for up to 5 years, is still ongoing. In this study, which is a subanalysis of the SMAC study, among all enrolled patients (n=1433), we selected those with a clinical diagnosis of liver cirrhosis according to the International Classification of Diseases-9 codes (ie, 571, 571.2, 571.5, 571.6, 571.4, 571.40, 571.41, 571.49, 571.8 and 571.9). Hence, this is a cross-sectional study, in which we used data from a single time point (ie, the time of discharge of the patient). Also, the discharge letter of each patient with cirrhosis was reviewed for confirming the aetiology of the disease, according to internationally recognised guidelines and recommendations. Among all causes of cirrhosis, we categorised patients as having alcohol, viral (either by HBV and/or HCV infection) or NAFLD cirrhosis. Patients with undetermined causes of cirrhosis or with rare causes of cirrhosis (eg, autoimmune liver disease, sclerosing cholangitis and others) were excluded. In the case of multiple aetiologies, we selected either the leading or the more lasting cause of liver injury. Liver cirrhosis was diagnosed on the basis of clinical features, laboratory characteristics, imaging (abdominal ultrasound, liver fibroscan) and liver biopsy (when available). Alcohol cirrhosis was diagnosed when a history of persistent alcohol consumption/abuse was ascertained while the diagnosis of viral hepatitis relied on serology. 
NAFLD cirrhosis was diagnosed when all other causes of cirrhosis were ruled out, and other clear metabolic alterations were present (ie, obesity/overweight, dyslipidaemia, oral glucose intolerance or diabetes mellitus type II); in some cases, the diagnosis was also confirmed by biopsy. Considering its clinical features and the progressive disease course, liver cirrhosis could ideally represent a model of comorbidity or multimorbidity, both encompassing the concept of multiple chronic conditions. In this regard, recently standardised definitions for comorbidity and multimorbidity have been introduced to distinguish patients in the context of multiple chronic conditions. As already stated, comorbidity indicates the combined effects of additional conditions in reference to an index disease under study, whereas multimorbidity indicates the mere co-occurrence of multiple diseases within the same individual, in which no single disease holds priority. Accordingly, specific novel Medical Subject Heading (MeSH) definitions have been released for indexing purposes. Following these definitions, all our patients have been categorised as having either comorbidity or multimorbidity by an expert physician who reviewed all patients’ discharge letters. For example, patients having only complications of liver cirrhosis (namely cirrhosis decompensation, gastrointestinal bleeding, hepatic encephalopathy, ascites) have been categorised as being comorbid (ie, all these conditions are dependent on liver cirrhosis, which is therefore the index disease), while patients with association with other clinically relevant conditions (eg, a patient with liver cirrhosis, ischaemic heart disease, diabetes mellitus type II and chronic kidney failure) have been categorised as having multimorbidity. As a primary aim, we looked at the rates of comorbidity or multimorbidity and other possible determinants of clinical complexity in patients with cirrhosis, compared with the whole SMAC cohort. 
As a secondary aim, we compared the rate of comorbidity and multimorbidity according to the aetiology of liver cirrhosis, as well as other potential determinants of clinical complexity, including sex, body mass index (BMI), schooling (categorised into <8 or ≥8, which is the legal number of compulsory education), income (categorised into <€1000/month or ≥€1000/month), Cumulative Illness Rating Scale (CIRS) comorbidity e severity index, Edmonton Frail Scale (a score >5 indicates being frail), Barthel index (a score <60 indicates dependency), Short Blessed Test (a score >9 indicated cognitive impairment), length of stay (LOS). The causes of admission to hospital were categorised as either related or unrelated to liver cirrhosis and were included in the multivariable analysis. Finally, we sought to determine the factors affecting the risk of having multimorbidity according to the aetiology. Continuous data were described with the median and IQR and compared with the Mann-Whitney U test or the Kruskall-Wallis test. Categorical data were reported as counts and per cent and compared with the Fisher’s exact test. Based on clinical considerations, we chose a priori a series of candidate variables, which were considered the most relevant patient clinical characteristics according to the aetiology of cirrhosis. These were checked for collinearity and were included in a logistic multivariable model. For descriptive purposes, the univariable analysis of the candidate variables was also performed. The area under the model receiver operating characteristic (ROC) curve was computed as a measure of model performance. The model calibration was assessed graphically using the calibration plot and the corresponding statistic test was computed. We did not formally calculate the sample size for this substudy, as all patients from the SMAC registry were included. 
However, given the overall sample of 172 patients with cirrhosis, of whom 36 had comorbidity, we would be able to fit a multivariable model with up to about four predictors without overfitting, according to the 1:10 predictors-to-events rule. The good calibration of our model (6 df) was nevertheless verified a posteriori, as described above. The software Stata V.17 (StataCorp) was used for all computations. The study follows the STrengthening the Reporting of OBservational studies in Epidemiology (STROBE) recommendations for reporting. Patient and public involvement: None. reports the baseline characteristics of the entire cohort of 172 patients with cirrhosis (median age 79 years, IQR 67–84; 83 females) compared with the other 1261 patients (median age 80 years, IQR 70–86; 685 females) included in the SMAC study. Patients with cirrhosis displayed higher CIRS comorbidity (4, IQR 3–5, p=0.01) and severity (1.85, IQR 1.6–2.0, p<0.001) indexes and a lower educational level (103, 59.9%, p=0.002). No other significant differences were noticed for sex, nutritional status, frailty, dependency, cognitive impairment, income and living alone. reports the main demographic and clinical characteristics of patients with liver cirrhosis according to their aetiologies. Notably, we found that patients with alcohol cirrhosis were significantly younger (median age 65 years, IQR 56–79) and more commonly male (25, 75.8%) than patients with cirrhosis of other aetiologies (p<0.001). Further, BMI was significantly higher (27.1, IQR 23.7–31.8) in patients with NAFLD cirrhosis (p<0.001). No differences among groups were noticed in terms of CIRS comorbidity and severity indexes, frailty, dependency, cognitive impairment, living alone, schooling and LOS. Regarding comorbidity and multimorbidity, we found a significant difference in their prevalence among the three liver aetiologies under study (p=0.015).
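The 1:10 events-per-predictor reasoning above amounts to a one-line calculation. This is a rule-of-thumb sketch, not a formal sample-size or power analysis:

```python
def max_predictors(n_events, events_per_predictor=10):
    """Rule-of-thumb cap on model degrees of freedom:
    one candidate predictor per `events_per_predictor` outcome events."""
    return n_events / events_per_predictor

# 36 comorbid patients among 172 -> 3.6, ie roughly four predictors
print(max_predictors(36))   # 3.6
```

The rarer outcome category (comorbidity, n=36) supplies the events here, which is why the cap is about four predictors rather than 172/10.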
In particular, comorbidity was more prevalent in patients with alcohol cirrhosis (13, 39.4%), while multimorbidity was more prevalent in viral (64, 81.0%) and NAFLD (52, 86.7%) cirrhosis. Finally, in a multivariable model, we found that a CIRS comorbidity index >3 (OR 2.81, 95% CI 1.14 to 6.93, p=0.024) was significantly correlated with having multimorbidity. On the contrary, admission related to cirrhosis (OR 0.19, 95% CI 0.07 to 0.54, p=0.002) was inversely correlated with the presence of multimorbidity. shows the good calibration of the model while shows the univariable analysis of the candidate variables. Here, we found some important differences regarding the baseline clinical characteristics of patients with cirrhosis compared with the whole cohort of patients hospitalised in an academic internal medicine ward. In particular, patients with cirrhosis had higher CIRS indexes (comorbidity and severity) and higher rates of comorbidity and multimorbidity, as well as a lower educational level, despite being similarly frail and dependent and having similarly impaired cognitive function. These latter results were not unexpected, considering that our controls were similarly old (median age 80 years vs 78) and hospitalised. In a similar large, prospective, multicentre study of 6193 patients older than 65 years enrolled in internal medicine and geriatric wards, liver cirrhosis was found in 315 (5%); of these, 43% were multimorbid, 44% had cognitive impairment and 51% were disabled. Our study is the first in which a distinction between comorbidity and multimorbidity was made in a population of hospitalised patients with a specific chronic disease. Indeed, previous studies have analysed the presence of multiple chronic conditions in patients with liver disease, but the term ‘comorbidity’ has been used with a different meaning, outside the current MeSH definition.
In these studies, it was evident that patients with cirrhosis suffered from many other disorders, but these disorders were not identified as either consequences of cirrhosis itself or of its aetiological factor (ie, comorbidities) or as separate entities (ie, multimorbidity). Regarding differences among cirrhosis aetiologies in our study, we found that patients with viral (median age 81 years, IQR 77–85) and NAFLD (median age 78 years, IQR 65–82) cirrhosis were significantly older than patients with alcohol cirrhosis (median age 65 years, IQR 56–79), as already demonstrated in other studies which, however, were conducted in completely different settings (eg, population level or specialty settings). This translates into a higher rate of multimorbidity, which we indeed found, possibly due to the stochastic accumulation of different disorders with advancing age. Conversely, in patients with alcohol cirrhosis, the higher rate of comorbidity could be interpreted as a direct consequence of alcohol abuse, which is a strong and well-known risk factor for multiple organ involvement, often reflecting a common psychopathological basis. Additionally, in the alcohol cirrhosis group, we found a clear male predominance, while in the other groups there was no prominent difference with regard to biological sex, consistent with previous reports. Of note, although a higher prevalence of alcoholic cirrhosis in male patients is expected, the gap in alcohol consumption between men and women has been progressively narrowing over recent years. Admission related to cirrhosis was inversely related to the presence of multimorbidity, while the CIRS comorbidity index was directly related to multimorbidity. These correlations serve as an internal check supporting the validity of the classification used to categorise patients as having either comorbidity or multimorbidity.
For example, a patient with cirrhosis and many other randomly associated chronic conditions (multimorbid) would be more likely to be admitted to hospital because of one of these conditions than a patient with cirrhosis and its classical comorbidities, such as ascites, gastrointestinal bleeding or encephalopathy (comorbid). It is not surprising that, according to a recent expert consensus, the evaluation of socioeconomic factors, educational status and comorbid psychiatric illness should all be taken into account by a multidisciplinary team in patients with alcohol cirrhosis. In fact, a low educational level was found to be common in our patients with alcohol cirrhosis, and interventions aimed at improving one’s knowledge of the disease may translate into a therapeutic advantage. We are aware that our study has some limitations that should be mentioned. The sample size was rather small, especially for some cirrhosis aetiologies (eg, autoimmune liver disease), so we had to exclude these patients from our analysis. Hence, a wider multivariable analysis could not be performed. Although our data should be considered preliminary in this field, a distinction between comorbidity and multimorbidity could potentially aid decision-making in patients with cirrhosis, in whom prioritisation of the clinical problems to be solved is mandatory. Also, our data should be interpreted in the light of the specific setting of enrolment, in which admitted patients are usually older than in other settings. Hence, our data cannot be generalised to other contexts, such as population-based or primary care settings. Nevertheless, this study had some strengths, including prospective (rather than administrative) data collection performed during hospitalisation by dedicated, qualified healthcare professionals who had been trained before study commencement.
Conclusion To conclude, we have performed the first study focusing on the distinction of comorbidity and multimorbidity in a cohort of patients with a specific chronic condition. We found that patients with alcoholic cirrhosis had a high comorbidity rate, while the other aetiologies—viral and NAFLD—were mostly multimorbid due to ageing. How these characteristics may translate into distinct and personalised clinical management should be further investigated. |
Olfactory impairment in psychiatric disorders: Does nasal inflammation impact disease psychophysiology? | ae4348ad-ca0f-4c2c-86b7-58c02334fa60 | 9352903 | Physiology[mh] | Alterations in multiple sensory modalities, including auditory, visual, and olfactory processing, have been reported in patients with psychiatric disorders, such as schizophrenia and depression, and these deficits may underlie complex cognitive dysfunctions . While pathophysiological mechanisms in the auditory and visual systems have been actively investigated , the current pandemic of Coronavirus Disease 2019 (COVID-19) highlights that chronic olfactory deficits that may impact brain function and mental health are an important and timely research topic for understanding the complex pathophysiology of psychiatric disorders. Evolutionarily, the olfactory system is a crucial sensory modality, essential for survival behaviors and behavioral adaptation upon detecting odor cues . Odor information is initially perceived by olfactory sensory neurons (OSNs) in the olfactory epithelium (OE) inside the nasal cavity and is transmitted to the olfactory system, which is comprised of the olfactory bulb (OB) and primary olfactory cortices, including the anterior olfactory nucleus (AON) and the piriform cortex (Pir) . Recent neuroscience research has uncovered neural connections between the olfactory system and higher cerebral cortices, including the medial prefrontal cortex (mPFC) and orbitofrontal cortex (OFC), which are associated with higher brain functions such as cognition, memory, motivation, and emotion (Fig. ). Accumulating evidence suggests that olfactory impairments are involved in the pathology of Alzheimer’s disease and Parkinson’s disease . In addition to neurodegenerative diseases, there is also compelling evidence that olfactory impairments are implicated in psychiatric disorders . Impairment of odor discrimination associated with schizophrenia was initially reported in 1988 . 
Since then, many studies reported that olfactory performance, demonstrated through measures of odor identification, odor discrimination, and/or odor detection, is impaired in patients with schizophrenia, psychosis, and depression . More recently, pathological changes in the olfactory system, including a reduction in OB volume, aberrant functional connectivity among brain regions critical for olfactory processing, and neuronal and molecular changes in the OE have also been found in patients with schizophrenia and psychosis . OB volume loss has also been observed in patients with depression . Olfactory deficits are associated with negative symptoms, impaired social and cognitive functioning, and depressive symptoms . Furthermore, olfactory dysfunction may be a significant pathological hallmark in the early stages of disease progression that include first episode psychosis . Although these results support the pathological implication of olfactory impairments in psychiatric disorders, how olfactory dysfunction affects the neural mechanisms underpinning higher brain functions remains poorly understood. There is a large body of clinical evidence indicating that inflammatory processes are involved in the pathophysiology of schizophrenia and depression . Complementary findings from preclinical studies highlight aberrant systemic and brain immune systems as potential mechanisms underlying neuroinflammation that lead to behavioral outcomes relevant to these psychiatric disorders . This notion is supported by epidemiological findings that environmental factors such as air pollutants and viral infections contribute to the risk for psychiatric diseases including schizophrenia and depression , perhaps at least in part via nasal inflammatory mechanisms. However, much less is known about the specific pathological role of OE inflammation for neurobehavioral consequences and disease pathophysiology. 
In this article, we will first give an overview of published findings on how nasal inflammation impacts the olfactory system. We will also summarize clinical evidence on olfactory impairment in psychiatric disorders, with a particular focus on schizophrenia and depression. We will then provide an overview of preclinical studies on the neurobehavioral outcomes produced by olfactory dysfunction. Finally, we will discuss the potential impact of OE inflammation on brain development and function, as well as in disease-associated mechanisms, which may contribute to understanding the importance of OE inflammation in the pathophysiology of psychiatric diseases. Anatomy of the OE and OB The OE is located at the dorsal and posterior portion of the nasal cavity, variably extending inferiorly along the nasal septum and turbinates . Histologically, the pseudostratified OE is thicker than the respiratory epithelium and lacks motile cilia. The apical sustentacular cells surround the cell bodies and dendritic projections of mature OSNs. Underneath this layer are immature neurons and progenitor cells including horizontal basal cells (HBCs). The half-life of OSNs is about 90 days, and the OE has a remarkable capacity for regeneration, with normal turnover of OSNs through globose basal cell proliferation and differentiation . As a result of severe OE damage, quiescent HBCs become active and can differentiate into not only OSNs, but also non-neuronal cells, such as sustentacular cells, globose basal cells, and Bowman’s gland cells, regenerating the entire neuroepithelium . Bipolar OSNs have sensory cilia that extend from dendritic knobs into the nasal cavity. These OSNs have axonal projections that cross through the foramina of the cribriform plate of the ethmoid bone to reach the glomeruli of the OB. Newly regenerated OSNs express a given odorant receptor and precisely project onto discrete glomeruli of the OB that contain axons expressing the same odorant receptor . 
The OB projection neurons (mitral and tufted cells) then relay olfactory sensory information to primary olfactory cortices such as the AON and Pir (Fig. ). Nasal inflammation impacting the OE and OB OE inflammation has been extensively studied in chronic rhinosinusitis, which is a common heterogeneous inflammatory condition of unknown origin. Regardless of whether it results from an external trigger (e.g., environmental allergens, irritants, air pollutants, or microbes) or an underlying intrinsic immune dysregulation, sinonasal inflammation causes symptoms of nasal congestion, drainage, and, in many cases, a diminished sense of smell. Most likely, a reduction in airflow plays an important role in olfactory loss as there is decreased conduction of odorants to the olfactory cleft. However, there is also a significant sensorineural component to olfactory loss that is not completely understood. This is at least in part because chronic inflammation damages the olfactory neuroepithelium and inhibits its regeneration . The cellular and molecular mechanisms underlying air pollutant-induced nasal inflammation have begun to be investigated, mainly with a focus on the effect of particulate matter (PM) . For instance, PM-treated human nasal epithelial cells or tissue samples exhibited an elevation of pro-inflammatory molecules (such as tumor necrosis factor (TNF), interleukin 1β (IL-1β), interleukin 6 (IL-6), and interleukin 8 (IL-8)), a transition of macrophages to a pro-inflammatory state, and disrupted epithelial barrier function due to a reduction of tight-junction proteins . Consistent with these findings, an accumulation of immune cells such as macrophages, neutrophils, and eosinophils, an elevated expression of IL-1β, interleukin 13 (IL-13), and eotaxin-1, and reduced expression of tight-junction proteins, such as claudin-1 and epithelial cadherin, are observed in the sinonasal tissue of the PM-exposed mice . 
Respiratory viral infections, such as influenza virus and coronavirus, induce olfactory inflammation . Preclinical studies also suggest that some of these infections induce central nervous system (CNS) inflammation or have access to the CNS via an olfactory route . In addition, recent studies on COVID-19, which is caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), have illustrated that it may lead to nasal inflammation and olfactory dysfunction . The molecular and cellular mechanisms underlying the effect of SARS-CoV-2 infection on the olfactory system and olfactory sensory loss have recently begun to be investigated . The lack of expression of viral entry factors, such as Angiotensin Converting Enzyme-2 (ACE2), suggests that OSNs may not be direct targets for SARS-CoV-2 . Instead, sustentacular cells, HBCs, and Bowman’s gland cells, which do express viral entry proteins , may be a reservoir of viral replication, which could result in OE cellular damage and inflammation, leading to disruption of OSN function . However, preclinical studies demonstrate that when nasally administered, the S1 subunit of the SARS-CoV-2 spike protein spreads to multiple brain areas with the highest entry into the OB , suggesting that OSNs may nevertheless serve as an initial route for viral invasion of the brain . It is conceivable that OE inflammation impacts central olfactory neural structures. Chronic rhinosinusitis, as well as neurodegenerative and psychiatric diseases, has been found to involve OB volume loss . Furthermore, recent evidence suggests that chronic rhinosinusitis is associated with an increased risk of psychiatric symptoms such as depression, anxiety, and cognitive dysfunction . In order to investigate how nasal inflammation impacts OE function, a genetic mouse model of inducible olfactory inflammation (IOI) has been developed .
In this Tet-on system, by crossing Tet-response element (TRE)-Tumor Necrosis Factor (TNF) mice with the cyp2g1-reverse tetracycline transactivator (rtTA) line, one can drive expression of TNF specifically in OE sustentacular cells, inducing chronic and local OE inflammation in a temporally controlled manner . Using this mouse model, we have previously reported on the critical role of HBCs as direct participants in the progression of chronic OE inflammation and have identified a concomitant Nuclear Factor κB (NF-κB)-mediated functional switch away from OSN regeneration . In another mouse model, Hasegawa-Ishii et al. demonstrated that chronic nasal inflammation induced by intranasal lipopolysaccharide (LPS) administration causes loss of OSNs and results in neuroinflammation, gross atrophy of the OB, and loss of synaptic contacts onto tufted cells, which is more severe than that onto mitral cells . Glial activation and pro-inflammatory cytokine expression were proposed by the authors to contribute to OB atrophy. Since OSN axons contribute to the olfactory nerve layer (ONL) and glomerular layer (GL), but not the external plexiform layer (EPL), shrinkage of the superficial layers of the OB and recovery from nasal inflammation-induced atrophy may not be entirely explained by OSN loss. The EPL is composed of the secondary dendrites of projection neurons, including mitral and tufted cells, which synapse with granule cell dendrites. EPL gliosis induced by the intranasal LPS administration model is implicated as a contributing factor to atrophy . However, evidence also suggests that the lack of sensory inputs due to OSN loss results in EPL shrinkage both in the LPS model and in odor deprivation models, owing to retraction of the dendrites of OB projection neurons. The impact of the loss of odor-signaling inputs may not be limited to the OB, as the AON also shrinks in response to odor deprivation and semilunar cells of the Pir undergo apoptosis .
Further study is required to examine whether nasal inflammation-induced OB circuitry changes affect cortical areas receiving inputs from mitral and tufted cells. Olfactory pathology in schizophrenia Olfactory impairments have gained growing interest in psychotic disorders, such as schizophrenia .
Substantial evidence from neuropsychological studies reveals several facets of olfactory dysfunction, such as impaired odor identification and odor discrimination, in individuals with schizophrenia, early onset psychosis, or high risk for psychosis . In contrast, findings related to odor detection threshold are mixed and incongruent in patients with schizophrenia . Previous studies have shown that these olfactory deficits are largely independent from the effects of cigarette smoking and psychiatric medications . However, a recent meta-analysis has shown that heavy smoking is paradoxically associated with smaller deficits in olfactory function in patients with schizophrenia . This study also revealed that previous studies in which patients were on a regimen of first-generation antipsychotics showed greater olfactory deficits than those in which patients were treated with second-generation antipsychotics . It is also worth noting that although some patients with schizophrenia experience olfactory hallucinations, several studies have demonstrated no association between olfactory hallucinations and olfactory impairments , suggesting that these phenotypes may be mediated by different neural mechanisms. Accumulating evidence does indicate that various olfactory deficits are associated with negative symptoms and impaired social and cognitive function in patients with schizophrenia and early psychosis . For instance, many studies have consistently reported that impaired odor identification is associated with negative symptoms and social dysfunction in patients with schizophrenia . Some studies have also revealed an association between odor identification deficits and cognitive impairments . Furthermore, impaired odor discrimination has recently been shown to be associated with negative symptoms and cognitive impairments in patients with schizophrenia . 
Olfactory impairments have gained growing interest in psychotic disorders, such as schizophrenia. Substantial evidence from neuropsychological studies reveals several facets of olfactory dysfunction, such as impaired odor identification and odor discrimination, in individuals with schizophrenia, early-onset psychosis, or high risk for psychosis. In contrast, findings related to odor detection threshold are mixed and incongruent in patients with schizophrenia. Previous studies have shown that these olfactory deficits are largely independent of the effects of cigarette smoking and psychiatric medications. However, a recent meta-analysis has shown that heavy smoking is paradoxically associated with smaller deficits in olfactory function in patients with schizophrenia. This study also revealed that previous studies in which patients were on a regimen of first-generation antipsychotics showed greater olfactory deficits than those in which patients were treated with second-generation antipsychotics. It is also worth noting that although some patients with schizophrenia experience olfactory hallucinations, several studies have demonstrated no association between olfactory hallucinations and olfactory impairments, suggesting that these phenotypes may be mediated by different neural mechanisms. Accumulating evidence does indicate that various olfactory deficits are associated with negative symptoms and impaired social and cognitive function in patients with schizophrenia and early psychosis.
For instance, many studies have consistently reported that impaired odor identification is associated with negative symptoms and social dysfunction in patients with schizophrenia. Some studies have also revealed an association between odor identification deficits and cognitive impairments. Furthermore, impaired odor discrimination has recently been shown to be associated with negative symptoms and cognitive impairments in patients with schizophrenia. Several studies have reported that impaired odor identification is associated with negative symptoms in patients with early psychosis as well. A few studies have also revealed an association between reduced olfactory discrimination and negative symptoms or cognitive impairments in first-episode psychosis. Furthermore, two studies have shown that odor identification deficits are associated with negative symptoms in high-risk individuals. One of these studies also showed an association between impaired olfactory identification and cognitive impairments in this population. A prospective cohort study reported that although there were no differences in olfactory identification at baseline or follow-up between high-risk individuals who transitioned to psychosis and those who did not, individuals with poor functional outcomes showed significantly lower baseline olfactory identification than those with good outcomes. The authors concluded that impaired olfactory identification may be a useful marker to distinguish high-risk individuals who may experience a poor functional outcome, regardless of transition status.

Neuroimaging studies reveal structural and functional abnormalities in various brain regions involved in olfactory processing in schizophrenia. These brain regions include the OB and primary olfactory cortices, which have an underappreciated neural connectivity with higher brain regions such as the mPFC and OFC, which regulate higher-order information processing.
In addition, recent studies have revealed reduced OB volume in patients with first-episode psychosis compared to healthy controls. Interestingly, physiological and brain abnormalities in the olfactory system are also observed in young high-risk individuals who developed schizophrenia, as well as their first-degree relatives. This finding suggests that assessing olfactory functioning may provide an early pathological sign and biological marker of disease progression, including during the prodromal stages before the onset of schizophrenia, which typically occurs in early adulthood.

While the pathological implication of olfactory impairment for schizophrenia has been extensively studied, the specific role of nasal inflammation in the developmental course of olfactory dysfunction is underexamined. The OE is the peripheral region of the olfactory system in the nasal cavity, where OSNs are directly exposed to environmental stimuli. Several studies have reported neuronal and molecular changes in the OE of patients with schizophrenia. In a pioneering study using immunohistochemical approaches, Arnold and colleagues reported a reduction in p75 nerve growth factor receptor (p75NGFR)-positive basal cells and an increase in growth-associated protein 43 (GAP43)-positive immature OSNs in the postmortem OE tissue of patients with schizophrenia, suggesting an altered developmental composition of the OSNs. More recently, using OE tissue collected by nasal biopsy from schizophrenia patients, microarray expression studies have revealed that differential expression of molecules in the small-mothers-against-decapentaplegic (SMAD) pathway, which is involved in inflammatory processes regulated by transforming growth factor-β (TGF-β), is associated with cognitive impairments.
In this study, the number of cigarettes smoked per day was evaluated as an independent variable in linear regression analysis, which revealed no significant effect of smoking on gene expression in the OE samples, suggesting that smoking did not account for these trends. Furthermore, recent RNA sequencing-based molecular expression studies reported that certain molecular pathways involved in the immune/inflammatory system, such as the NF-κB signaling pathway, are altered in olfactory neuronal cells derived from the OE, and that these changes are correlated with OB volume in patients with first-episode psychosis. While the exact molecular and cellular mechanisms behind these abnormalities are unknown, these pathological phenotypes may be produced by OE inflammation. Human OE-derived cell/tissue models could be a promising system to address these questions.

Respiratory viral infections (e.g., influenza virus and coronavirus) induce olfactory inflammation. Preclinical studies suggest that some of these infections induce CNS inflammation or have access to the CNS via an olfactory route. Epidemiological evidence indicates that not only maternal infection, but also childhood and adulthood infection, increases the risk of developing schizophrenia. Recent retrospective cohort studies showed a bidirectional association between diagnoses of COVID-19 and psychiatric disorders, including psychosis and depression. Other large cohort studies have reported high rates of psychiatric complications in patients with COVID-19, including psychosis and cognitive dysfunction. Overall, although a causal link between nasal viral infections and schizophrenia risk remains elusive, the nasal inflammation produced by respiratory viral infections may contribute to olfactory impairments in the psychopathology of schizophrenia. In addition, recent prospective cohort studies support air pollution exposure in childhood as an environmental factor that increases risk for schizophrenia.
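The smoking covariate check described earlier in this passage (entering cigarettes smoked per day as an independent variable in a regression on gene expression) can be illustrated with a toy simulation. All values below are simulated for illustration and are not data or code from the cited study; the point is simply that when group status, rather than smoking, drives expression, the fitted smoking coefficient stays near zero.

```python
import numpy as np

# Simulated data (hypothetical values, not from the study):
# expression depends on diagnosis; smoking is an unrelated covariate.
rng = np.random.default_rng(0)
n = 200
diagnosis = rng.integers(0, 2, size=n)            # 0 = control, 1 = patient
smoking = rng.poisson(8, size=n).astype(float)    # cigarettes per day
expression = 1.0 + 0.8 * diagnosis + rng.normal(0.0, 0.3, size=n)

# Ordinary least squares: expression ~ intercept + diagnosis + smoking
X = np.column_stack([np.ones(n), diagnosis, smoking])
beta, *_ = np.linalg.lstsq(X, expression, rcond=None)

print(f"diagnosis effect ~ {beta[1]:.2f}")  # recovers a value near the true 0.8
print(f"smoking effect   ~ {beta[2]:.2f}")  # near 0: smoking explains little here
```

In a real analysis one would also test whether the smoking coefficient differs significantly from zero; the sketch only shows how including the covariate separates its contribution from the group effect.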
For instance, a ten-year follow-up study in a Danish cohort reported that childhood nitrogen dioxide (NO₂) exposure is associated with an increased risk of developing schizophrenia. Another study, following a UK cohort for 18 years from birth, revealed an association between NOx exposure and increased psychotic experiences during adolescence. A one-year follow-up study found that exposure to air pollution during childhood is particularly associated with impairment of attention, memory, and learning abilities. Given that adolescence is a vulnerable period in which environmental stimuli can alter PFC maturation, as well as a critical period preceding the onset of schizophrenia in adulthood, the potential impact of adolescent exposure to air pollutants and viral infection on brain maturation and longitudinal schizophrenia risk also warrants further investigation. Altogether, this clinical evidence supports the intriguing hypothesis set out in this review that nasal inflammation induced by developmental exposure to these environmental factors may alter neural circuit maturation in the olfactory system, and perhaps even in higher cerebral cortex areas involved in the regulation of higher brain functions that are relevant to schizophrenia.

Olfactory pathology in depression

Olfactory impairment is also implicated in depression. One recent study systematically analyzed a reciprocal relationship between olfaction and depression that has been reported in previous studies. The study demonstrated that olfactory function, including olfactory threshold, discrimination, and identification, is impaired in depressive patients and, further, that patients with olfactory impairments experience worse depressive symptoms with greater severity of smell loss. Correspondingly, evidence also suggests that the degree of olfactory impairment varies depending on the duration and course of depression.
Thus, it is of interest to investigate whether nasal inflammation may contribute to the pathophysiology of depression via olfactory dysfunction. Respiratory viral infections and air pollutants are potential environmental risk factors not only for schizophrenia and psychosis, but also for depression. For instance, a population-based study showed that people with a previous influenza infection had an increased risk of developing depression. Similarly, COVID-19 infections are associated with increased rates of newly recognized depression. Previous studies have also demonstrated the pathological implication of air pollutants (e.g., fine particulate matter [PM2.5]) in depressive patients. In the context of gene-environment interaction, PM2.5 exposure is involved in altered cortical neural circuit networks in patients with depression. The pathological implication of nasal inflammation is also supported by evidence that chronic rhinosinusitis is associated with an increased risk of depression. Allergic rhinitis is induced by IgE-mediated reactions to inhaled allergens, which produce an inflammatory reaction in the nasal mucosa. Epidemiological evidence suggests that a history of seasonal allergies confers a heightened risk of depression. Furthermore, a population-based prospective case-control study showed that allergic rhinitis during adolescence increases the risk of depression. These results indicate that nasal inflammation produced by various factors (e.g., viral infection, air pollutants, and chronic and allergic rhinitis) may contribute to a greater risk for depression. Studies have reported reductions in OB volume in patients with major depressive disorder, as well as a negative correlation between OB volume and depression scores.
Olfactory and emotional processing are regulated by shared neural circuits and brain regions, such as the amygdala, that receive sensory information from the OB, supporting the notion that olfactory impairments may be involved in the pathophysiology of depression. Nevertheless, given that these results come from cross-sectional studies with relatively small cohorts, further studies with larger sample sizes and longitudinal designs are required. Histochemical and molecular phenotyping of the OE in depressive patients is also an area of future investigation.

Neurobehavioral outcomes produced by olfactory dysfunction

Behavioral changes caused by olfactory dysfunction are observed in multiple rodent models (Table ). Surgical removal of the OB in mice and rats, namely olfactory bulbectomy, results in hyperactivity, altered sleep patterns, aberrant stress-induced coping responses, and abnormalities in various cognitive functions such as spatial memory performance, recognition memory, motivational behavior, and fear learning. Given that these phenotypes include non-odor-guided behaviors (e.g., fear learning), olfactory bulbectomy does not solely impair olfaction, but also affects higher-order cognitive processing regulated by the primary olfactory cortices and downstream brain regions that include the hippocampus, amygdala, mPFC, and OFC. Indeed, optogenetic stimulation of OSNs induces rhythmic activity in the OB and mPFC, and disruption of olfactory input impairs the neural activity of the mPFC. Behavioral outcomes have also been elicited by genetic inhibition of olfactory function. Genetic deletion of cyclic nucleotide-gated channel subunit alpha 2 (Cnga2), which is essential for regulating odorant signal transduction, resulted in impaired spatial memory and social behaviors, as well as anxiety-like behavior.
OSN-specific overexpression of M71 odorant receptors produced aberrant odor-evoked neural activity, leading to octanal-induced seizures and anxiety-like phenotypes. Genetic deletion of the recombination activating gene (Rag-1), which is expressed not only in lymphoid cells but also in OSNs, caused disorganized glomerular structure in the OB, impaired odor-sensing, and anxiety-like behavior. Furthermore, a pharmacological olfactory lesion induced by an intraperitoneal injection of methimazole and an infusion of tetrodotoxin into the OB resulted in increased freezing behaviors during retrieval of conditioned fear. Differences in the behavioral phenotypes observed in these rodent models may be explained by the different experimental approaches used to impair olfactory function. Although these results reinforce the importance of the underappreciated olfactory pathways in the regulation of higher brain function, the underlying molecular and neural circuit mechanisms remain obscure. It should be noted that rodents depend on olfaction for sensing the external world to a much greater extent than humans do, and that their performance in specific behavioral phenotypes, such as social behaviors, heavily depends on olfaction. By examining the behavioral impact of manipulating specific olfactory-prefrontal neural circuits, such as the OB-AON-mPFC and OB-Pir-OFC pathways, rodent models may help us to identify neural substrates involved in olfactory modulation of higher brain functions. In particular, it is of interest to examine the roles of olfactory-prefrontal circuits in the modulation of behavioral outcomes in positive valence systems, social processes, and cognitive systems: areas in which disturbances are associated with olfactory impairments in psychiatric disorders.
OE inflammation as an entry point causing pathology of psychiatric disorders

In order to examine whether nasal inflammation-induced disturbance in the peripheral olfactory system indeed causes behavioral alterations of neuropsychiatric relevance, we have recently investigated the behavioral consequences of chronic, local OE inflammation using the aforementioned inducible olfactory inflammation (IOI) mouse model. While no abnormalities in locomotion or anxiety-like phenotypes were observed, IOI mice exhibited impairment of sociability and preference for social novelty in the three-chamber social interaction test, suggesting that chronic OE inflammation impairs social behaviors, which rely heavily on olfactory cues in rodents. The sucrose preference test was also used to characterize a loss of consummatory pleasure, which is a component of anhedonia. IOI mice exhibited a lower preference for the sucrose solution compared to controls, suggesting that OE inflammation may dampen consummatory pleasure. Considering that olfactory deficits are correlated with social and cognitive abnormalities, as well as negative symptoms in psychosis, relevant behavioral domains in this mouse model warrant future investigation. In addition, rodent models of exposure to environmental risk factors for psychosis, such as air pollutants and microbial infections, displayed OE inflammation and OB volume loss. Adolescent mice chronically exposed to air pollutants showed learning and memory deficits. Chronic exposure to ozone has also been shown to impair olfactory perception and social recognition memory in rats. The importance of OE inflammation's impact on higher brain function is also supported by rodent models of chronic rhinosinusitis. These models produce olfactory impairments, OSN loss, OB volume reduction, OB-mPFC circuit disruption, and behavioral abnormalities including altered social behaviors.
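The behavioral assays mentioned above (the three-chamber test and the sucrose preference test) are typically summarized as simple ratio indices. The sketch below uses hypothetical numbers, not data from these experiments, to show how such indices are commonly computed.

```python
def sucrose_preference(sucrose_ml: float, water_ml: float) -> float:
    """Percent of total fluid intake drunk from the sucrose bottle."""
    return 100.0 * sucrose_ml / (sucrose_ml + water_ml)

def sociability_index(time_with_mouse_s: float, time_with_object_s: float) -> float:
    """Fraction of exploration time spent with the stranger mouse
    versus an inanimate object in the three-chamber test."""
    return time_with_mouse_s / (time_with_mouse_s + time_with_object_s)

# Hypothetical example values only:
print(sucrose_preference(8.0, 2.0))     # 80.0 -> strong sucrose preference
print(sucrose_preference(5.5, 4.5))     # 55.0 -> blunted preference (anhedonia-like)
print(sociability_index(180.0, 120.0))  # 0.6  -> preference for the social stimulus
```

A preference near 50% (or an index near 0.5) indicates chance-level behavior, which is why a drop toward those values in IOI-like animals is read as reduced preference rather than a general loss of activity.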
These results, together with the epidemiological evidence described above, suggest that the OE may be an entry point for the deleterious effects of inflammatory environmental factors, such as air pollutants and viral infection, on the CNS. The question arises of how local OE inflammation contributes to the neural dysfunction underlying behavioral phenotypes relevant to psychiatric disorders. Several possible mechanistic routes by which OE inflammation impacts the CNS remain to be explored: (1) OSN dysfunction induced by OE inflammation, resulting in abnormalities in the structural or functional composition of the OB; (2) OE inflammation spreading to the CNS through the OB; and/or (3) direct translocation of viruses and air pollutants to the olfactory system. These changes may contribute to disturbance of olfactory-prefrontal circuits, leading to neurobehavioral consequences relevant to psychiatric symptoms (Fig. ). As described above, olfactory deficits are observed early in disease progression, including during the prodromal stages. Adolescence is a critical period for both PFC maturation and the prodromal process of psychiatric disorders, and the peripheral olfactory system displays a high degree of plasticity. Thus, it is worth investigating whether OE inflammation-induced molecular and neural alterations during developmental periods such as adolescence may lead to long-lasting behavioral abnormalities in adulthood. The IOI mouse model may be useful in investigating this possibility, as it allows us to temporally control OE-specific expression of TNF, provoking local and chronic OE inflammation. One can envision that OSN dysfunction may be a major contributory factor mediating the adverse neurobehavioral effects of OE inflammation.
Crossing TRE-inward rectifier potassium channel transgenic mice with the olfactory marker protein (Omp) promoter-driven tTA line (Omp-tTA;TRE-Kir2.1) will allow us to determine whether non-inflammatory OSN dysfunction negatively impacts olfactory-prefrontal circuits and resultant behaviors. Given that genetic risk factors play an important role in the etiological complexities of psychiatric diseases, it is also crucial to explore the convergent mechanisms of genetic risk factors and nasal inflammation.
In the past decade, multiple lines of evidence from clinical studies and neuroscience research have shed light on the underappreciated olfactory pathway for regulation of higher brain function and its implications for the pathophysiology of psychiatric disorders such as schizophrenia and depression in a cross-disease manner. As growing evidence suggests that inflammatory processes play a role in disease pathophysiology, it is important to explore if and how OE inflammation induces molecular and neuronal alterations in the OE that provoke impairment in olfactory circuit-mediated brain systems, leading to neurobehavioral consequences relevant to these disease conditions. This area of research is particularly critical when we consider the current outbreak of COVID-19, which may increase the risk of psychiatric disorders via nasal inflammation. However, it should be noted that the psychological stress produced by social isolation and restriction may also be involved in the increased risk of psychiatric disorders in patients with COVID-19. Although this variable is difficult to address in clinical research, it is testable in preclinical studies. By using rodent models of social isolation and SARS-CoV-2 infection, we can explore how social isolation may have a convergent effect with SARS-CoV-2 infection on brain function, which may include disturbances involved in the psychopathology of psychiatric disorders. Psychiatric consequences of COVID-19 should also be longitudinally monitored, and future preclinical investigations are needed to characterize the pathological effect of SARS-CoV-2 infection on brain function. In summary, the findings discussed above suggest that nasal inflammation impairs the peripheral olfactory system and may affect olfactory-higher brain circuits, leading to neurobehavioral abnormalities. 
Further research into the molecular, cellular, and circuit mechanisms underlying the effects of nasal inflammation on brain function is crucial in addressing how OE inflammation contributes to the adverse effects of environmental factors such as air pollutants and viral infection on the central nervous system, potentially leading to neuropsychopathology relevant to psychiatric disorders. |
Comparison of commonly used software pipelines for analyzing fungal metabarcoding data | 5bcd7d16-abdc-4c9e-9a4e-53191d2d7c44 | 11566164 | Microbiology[mh] | Fungi comprise a clade of eukaryotes with diverse life forms. They are colonizing every habitat on the planet, utilizing all substrates including other living organisms . Remarkably, over 90% of all known fungal species inhabit soil , where they are known to play key roles in nutrient cycling, impacting environmental physicochemical properties, and the health of other eukaryotes . Even so, it has been estimated that more than 90% of fungal taxa have not yet been discovered . Traditionally, fungal taxonomy has relied on laboratory cultures and the identification of fruiting bodies, but this is a relatively inefficient taxonomic method due to the diverse morphological and developmental features of fungi, especially as not all taxa are culturable. In recent years, high-throughput-sequencing (HTS) methods have resulted in an exponential increase in the detection of new fungal species from various environments and matrices, including living organisms . With this culture-independent technique, individual fungal species can be identified from sequences of DNA or RNA extracted from various sample types , allowing the comparison of fungal taxa and communities between various environments . Commonly, the nuclear 18S (small subunit, SSU) and 28S (large subunit, LSU) ribosomal RNA (rRNA) genes as well as the internal transcribed spacer (ITS) region have been the focus of such studies . Among these, the ITS region, especially the ITS subregions ITS1 and ITS2, have proven the most useful loci for fungi identification due to their high interspecific variation . Prior to the statistical analysis of metabarcoding results, the generated sequences must undergo sequence processing (e.g. clustering, classification) and quality control. 
Various software pipelines have been developed and made freely available, each with a diverse set of tools; however, since these have mainly been developed for prokaryote 16S rRNA sequences, they are not all considered equally suitable for fungal ITS analysis. Among the most cited sequence analysis pipelines are mothur and DADA2. Mothur is free, open-source software that can be executed from the command line. It incorporates OTU clustering by the robust and memory-efficient OptiClust algorithm. Mothur provides a fully transparent workflow, allowing the user to track all steps during sequence processing, and all commands can be specified by the user if needed. Although originally designed for prokaryote (16S rRNA) amplicon sequencing, mothur is also commonly used for other HTS studies with various barcode markers. DADA2 is also open-source software, available as an R package. It includes a full sequencing workflow and applies the most frequently used ASV-construction method, allowing accurate and high-resolution community reconstruction. A readily available workflow is provided, and it is applicable to any target locus (short and long reads), although its application to certain loci is debated. The first steps of sequence processing in both of the above software pipelines include primer trimming, removal (or ‘filtering’) of poor-quality reads (e.g. those with low quality scores, a high number of homopolymers, or ambiguous bases) and non-target reads, and the Illumina-specific merging of paired-end reads (in DADA2, the latter is carried out at a later step of the workflow). Finally, true biological variation is distinguished from unwanted, sequencing error-induced variation, sequences are compared to available databases, and taxa are identified. For fungi, given the limited knowledge of intraspecific variation, similar sequences are often aggregated into species-level operational taxonomic units (OTUs) to avoid overestimating species richness.
Among several de novo clustering approaches, the OptiClust algorithm, implemented in the mothur pipeline, produces high-quality OTU assignments at low computational load while simultaneously evaluating clustering quality using the Matthews correlation coefficient (MCC). Such clustering implies setting a ‘sequence similarity threshold’, which is usually a compromise between the highest possible differentiation between species and correction for sequencing errors. Although a 97% similarity threshold is often used, it has been noted that this may result in an underestimation of the true number of observed fungal species. Other studies highlight that a higher threshold (e.g. 99%) might be more appropriate for this OTU clustering method. Instead, DADA2 generates amplicon sequence variants (ASVs) based on an error model calculation, and assigns sequences with a minimum of one nucleotide difference to separate taxa, or removes them as potential noise. However, it has been suggested that, given the high levels of intragenomic variation of the fungal ITS region, the ASV approach for fungal ITS data may inflate the number of observed species (observed non-identical sequences); hence, the applicability of ASV approaches for the fungal ITS region is highly debated. Because initial data processing can impact the results and their interpretation, researchers must constantly evaluate the available bioinformatic opportunities in a fast-evolving field; therefore, studies comparing different pipelines are useful for promoting efficient workflows. Not only do the currently available software pipelines vary in their applicability to fungal metabarcoding data, but pipeline comparison studies have tended to focus on mock communities and simulated datasets, both of which suffer from oversimplification. In fact, no pipeline developed thus far has performed satisfactorily when tested on fungal mock communities.
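The MCC that OptiClust optimizes treats OTU clustering as a binary classification over sequence pairs: a pair within the similarity threshold should share an OTU, a more distant pair should not. As an illustrative Python sketch (not the actual mothur implementation), the coefficient can be computed from the four pair counts:

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from pair-classification counts.

    Following the OptiClust framing: a true positive is a pair of
    sequences within the similarity threshold that is placed in the
    same OTU; a true negative is a pair beyond the threshold that is
    placed in different OTUs. MCC ranges from -1 to 1.
    """
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0
    return (tp * tn - fp * fn) / denom

# Perfect clustering: every similar pair together, every distant pair apart.
print(mcc(tp=100, tn=900, fp=0, fn=0))  # 1.0
```

Maximizing this score over candidate assignments is what lets OptiClust report a quality measure alongside the OTUs themselves.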
Hence, testing the performance of different pipelines on fungal ITS datasets generated from complex, field-collected environmental samples is of particular interest. Here, we evaluated the performance of mothur, using both 97% and 99% identity thresholds, and DADA2 in analyzing fungal communities from two different field-collected sample types often used for targeted metagenomic studies: fresh bovine feces and pasture soil. For a set of 19 biological replicates (10 bovine feces and nine soil samples from different sample sites), we compared alpha and beta diversity generated by the three different pipelines. In addition, for a set of 36 technical replicates of both sample types (one biological sample each of bovine feces and soil, amplified 18 times each), we compared the basic read output, community composition, taxonomic classification, homogeneity of results among the replicates, and capacity of each pipeline to detect OTUs/ASVs. Furthermore, we examined the impact of different similarity thresholds for OTU clustering on fungal community results. To our knowledge, this is the first time the performance of the OptiClust OTU clustering method has been compared to that of other pipelines for fungal metabarcoding data.
Dataset
Fresh bovine fecal and pasture soil samples (in total 19 samples) were collected in June 2019 from an Alpine pasture in the Long-Term Social-Ecological Research Area (LTSER) Val Mazia/Matschertal (Province of Bolzano, Italy) as part of the EUREGIO project Microvalu (as described in refs. and ). These sample types were selected for testing the performance of the selected pipelines, as they represent two highly diverse fungal sources from this grassland ecosystem. Three to four bovine fecal samples were collected from each of three different sites (approx. 500 m apart) at an elevation of 1500 m a.s.l. from freshly deposited cow pats.
For each of the 10 samples (= biological replicates), about 50 g of feces were collected from three points on the pat, transferred to a sterile polypropylene tube and mixed thoroughly using a sterile spatula; the samples were then transported to the Fondazione E. Mach at 4 °C, and archived at -80 °C until further processing. Bulk soil (Lithic Leptosol – World Reference Base for Soil Resources) was collected from three pastured grassland sites (approx. 500 m apart) at an elevation of 2500 m a.s.l. The vegetation cover was carefully removed with a shovel and soil was gathered from the upper mineral horizon at 12–20 cm soil depth (Ah horizon). Each bulk soil sample was composed of 10 subsamples (approx. 100 g each), which were combined into a composite soil sample. In total, nine soil samples (= biological replicates) were prepared, transported to the Universität Innsbruck at ambient temperature after a few hours, and processed following and . In brief, 100 mg of soil sieved at 1 mm from each biological replicate was suspended in 10 ml of sterile ¼ Ringer solution containing 0.01% (v/v) Tween ® 80 in a sterile polypropylene tube. The soil suspension was shaken on an overhead shaker for 10 min at 90 rpm and treated in an ultrasonic bath for 1 min. The soil slurry was centrifuged at 10,000 x g for 15 min and the supernatant was discarded.
DNA extraction, amplification and ITS2 sequencing
DNA was extracted from each of the nine soil and 10 bovine feces biological replicates using the NucleoSpin ® Soil kit (Macherey-Nagel, Germany), to allow direct comparison of microbiota of the different sample types, following the manufacturer’s protocol with minor modifications, i.e.: (i) homogenization time was doubled, (ii) buffer SL1 was used for the lysis step, and (iii) a volume of 50 µL of enhancer buffer (SX) was added to the sample during lysis. For whole DNA extraction, 70 mg of fecal matter and 30 mg of soil slurry were used as input biomass, respectively.
Extraction controls, containing no sample material (lysis buffer only), were included to exclude contamination in subsequent analyses. Purity and quantity of the DNA extracts were checked by UV/VIS spectrometry using a Spark ® multimode microplate reader (Tecan, Switzerland). The ITS2 region, which is recommended for fungal biodiversity studies and is widely used, was selected for amplification. Our primers of choice were ITS4_ILL and gITS7_ILL, both of which are among the primers recommended for high-throughput identification of fungi and result in high fungal coverage. To generate technical replicates, one randomly selected biological replicate each of the bovine feces and soil samples was amplified 18 times. For the amplification of the fungal ITS2 region, 9 ng of extracted DNA was mixed with 1x FastStart High Fidelity Reaction Buffer (Roche Applied Science), 1.5 U of FastStart High Fidelity Enzyme Blend (Roche Applied Science) and primers ITS4_ILL / gITS7_ILL to a final concentration of 0.4 µM, resulting in a 30 µL final PCR mix per replicate. PCR reactions were performed on a Veriti™ 96-Well Fast Thermal Cycler (Applied Biosystems, USA) using the following conditions: 3 min at 95 °C, followed by 31 cycles of 30 s at 95 °C, 30 s at 50 °C and 30 s at 72 °C, and a single final extension step of 7 min at 72 °C. Non-template controls (amplification controls), containing no DNA but amplification mix only, were included. Quality of the amplicons was checked by high-resolution capillary electrophoresis using the QIAxcel Advanced System (QIAGEN). High-throughput sequencing was performed by the FEM Sequencing and Genotyping Platform (San Michele all’Adige, Italy) on an Illumina MiSeq Standard Flow Cell, using v3 chemistry, 300 bp paired-end reads and a minimum depth of 30,000 reads per sample.
For sequence processing, we used two software platforms (DADA2 and mothur) and created the following three ‘pipelines’: DADA2, generating ASVs (hereafter ‘dada2 pipeline’), and mothur, generating OTUs with a similarity threshold of 97% (‘mothur_97% pipeline’) or 99% (‘mothur_99% pipeline’), using the default commands given by each of the software publishers (Fig. ). The same quality filtering and taxonomy assignment settings were adopted in both software platforms to facilitate the comparison of results (see conditions below). Additionally, raw read processing was also conducted with the default settings or recommended standard operating procedures for quality filtering and taxonomy assignment of fungal reads. This allowed us to evaluate the results of each pipeline using the most commonly used approaches in microbial ecology. The applied settings and the results of this additional analysis are provided in the supplement (see Supplementary Information, Material & Methods and Results & Discussion sections). Raw sequencing data were deposited in the NCBI Sequence Read Archive (SRA) and are accessible under the BioProject ID: PRJNA1055419. Details of the samples are provided in the Supplementary Table (Supplementary Material ).
Bioinformatic downstream analysis generating ASVs – dada2 pipeline
In the dada2 pipeline, barcode-free, paired-end reads of demultiplexed samples were processed following an ITS-specific adaptation of the 1.8 DADA2 tutorial workflow ( https://benjjneb.github.io/dada2/ITS_workflow.html ), using the DADA2 package in R (v 4.2.0, 23). Primers were removed with Cutadapt, and reads were quality filtered using the filterAndTrim function with the following settings (unified among pipelines): reads shorter than 100 bp, containing ambiguous bases or of ‘bad quality’ were discarded, where bad-quality reads were defined as reads not passing the filterAndTrim parameter maxEE = c(2,2).
The settings maxN, minLen and maxEE were the specified optional arguments in the filterAndTrim function. Reads were not truncated to uniform lengths, in order to maintain length polymorphisms of the ITS region. ASVs were generated using the DADA2 inference algorithm via the learnErrors and dada functions. Reads were merged ( mergePairs function) and chimeric sequences were removed ( removeBimeraDenovo function). Taxonomic classification was assigned ( assignTaxonomy function) using the UNITE (v8.3) database and the RDP Naive Bayesian Classifier algorithm. The bootstrap cutoff for assignment was set to 80% for all pipelines.
Bioinformatic downstream analysis generating OTUs – mothur_97% and mothur_99% pipelines
Fungal OTUs were constructed using mothur (v.1.48.0) following the MiSeq SOP (last access 6/10/22). Forward and reverse reads were merged using the make.contigs function with the following parameters (unified among pipelines): maxambig = 0, maxee = 2, deltaq = 0. No read length truncation was performed, for the reasons explained above. After primers were trimmed with the trim.seqs function, sequences less than 100 bp in length were discarded. Sequences were pre-clustered allowing a maximum of three base pair differences between reads; chimeric sequences were also removed. Sequences were classified with the classify.seqs function using the UNITE (v8.3) database and applying the RDP Naive Bayesian Classifier algorithm. The bootstrap cutoff for the taxonomy assignment was unified for all pipelines (set to 80%). OTUs were identified after calculating distances between sequences ( pairwise.seqs function) and clustering sequences using OptiClust with an identity level of either 97% (mothur_97% pipeline) or 99% (mothur_99% pipeline). Clustering with different identity levels was the only step in the workflow where the two mothur pipelines differed from one another. Finally, the consensus taxonomy for each OTU was determined with the classify.otu command.
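Both pipelines share the same expected-error quality filter (maxEE / maxee = 2), which discards reads whose summed per-base error probabilities exceed the cutoff. A minimal Python sketch of this standard criterion (illustrative only; the study used the DADA2 and mothur implementations) is:

```python
def expected_errors(quals):
    """Sum of per-base error probabilities from Phred quality scores:
    p = 10^(-Q/10) for each base."""
    return sum(10 ** (-q / 10) for q in quals)

def passes_filter(seq, quals, max_ee=2.0, min_len=100):
    """Shared filter criteria described in the text: discard reads
    shorter than 100 bp, with ambiguous bases (N), or with more than
    max_ee expected errors."""
    return (
        len(seq) >= min_len
        and "N" not in seq
        and expected_errors(quals) <= max_ee
    )

# A 150-bp read of uniform Q30 has 150 * 0.001 = 0.15 expected errors.
read = "A" * 150
print(passes_filter(read, [30] * 150))   # True
print(passes_filter("ACGTN", [40] * 5))  # False (too short, contains N)
```

Because expected errors accumulate over read length, this filter penalizes long low-quality reads more strongly than a simple mean-quality cutoff would.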
Statistical analysis
Statistical analyses and graphical outputs were produced using the microeco and phyloseq packages in R. Contaminant OTUs/ASVs were identified and removed by comparing sample data with that of extraction and amplification controls using the decontam package. Rare OTUs/ASVs were removed based on a pipeline-specific relative abundance threshold, which was applied sample-wise to account for different sequencing depths among libraries. The threshold was defined based on the relative abundance corresponding to singletons and doubletons among the libraries and was calculated as follows: first, the mean read count per library was calculated, and then the proportion that 3 reads represent relative to this mean read count was determined as the threshold. This threshold was applied to each library independently, setting all OTUs/ASVs with a relative abundance below the threshold to zero. We calculated the threshold separately for each pipeline due to variations in the mean read count per library among pipelines (threshold for dada2: 0.0236%, threshold for both mothur pipelines: 0.0156%). On average, single libraries contained 12,700 (dada2) and 19,200 reads (both mothur pipelines). Removing OTUs/ASVs with a read count below 3 in a library of mean depth thus corresponds to excluding those with a relative abundance below the pipeline-specific threshold of 0.0236% (dada2) and 0.0156% (both mothur pipelines). After applying these relative abundance thresholds to our OTU/ASV tables, the minimum read counts in libraries ranged from 2 reads (in libraries exhibiting low sequencing depth) to 9 reads (in libraries exhibiting high sequencing depth). On average, the minimum read count per sample was 3.59 for dada2 and 3.54 for both mothur pipelines. Outputs of the three pipelines (dada2, mothur_97%, mothur_99%) were merged into one R object and compared.
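The sample-wise rare-taxon filter described above can be sketched as follows (an illustrative Python re-implementation under the stated definitions; the actual analysis was done in R, and the function name is ours):

```python
def filter_rare(counts_by_library, min_reads=3):
    """Pipeline-specific rare-OTU/ASV filter, as described in the text:
    the threshold is the relative abundance that min_reads (3) represents
    in a library of mean depth, and it is applied to each library
    independently (zeroing counts below the threshold)."""
    mean_depth = sum(sum(lib.values()) for lib in counts_by_library) / len(counts_by_library)
    threshold = min_reads / mean_depth  # e.g. 3 / 19,200 reads = 0.0156% for mothur
    filtered = []
    for lib in counts_by_library:
        depth = sum(lib.values())
        filtered.append({otu: (n if n / depth >= threshold else 0)
                         for otu, n in lib.items()})
    return threshold, filtered

# Two toy libraries of depth 10,000 and 20,000 (mean depth 15,000):
libs = [{"A": 9990, "B": 10}, {"A": 19997, "B": 3}]
thr, out = filter_rare(libs)  # threshold = 3 / 15,000 = 0.0002
```

Note that the same absolute count can survive in a shallow library but be zeroed in a deep one (here, 3 reads of "B" fall below the threshold only in the deeper library), which is exactly why the filter is relative rather than a fixed read-count cutoff.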
Four criteria were used to evaluate pipeline performance: (i) among biological replicates, the proportion of the fungal community (as measured by alpha and beta diversity) captured by each pipeline was evaluated; among technical replicates, we examined (ii) the proportion of OTUs/ASVs that were classified at each taxonomic level, (iii) the homogeneity of relative abundances of the most abundant genera between replicates and (iv) the capacity of each pipeline to detect OTUs/ASVs across replicates. To estimate alpha diversity in biological replicates, the number of observed OTUs/ASVs (richness) as well as the Shannon and Simpson indices were determined for bovine feces and soil samples using the microeco package. Alpha diversity was compared between pipelines within one sample type and between sample types within one pipeline using Duncan’s multiple range test for one-way ANOVA; p -values were adjusted using the Benjamini & Hochberg correction. Beta diversity in biological replicates was represented by NMDS ordination plots based on Bray-Curtis dissimilarities between samples. Significant differences between fungal communities of sample types and sampling sites were identified by applying a PERMANOVA (999 permutations). The betadisper function of the vegan package was used to estimate the multivariate homogeneity of group dispersions (variances), and differences between sample types were examined with an ANOVA. Proportions of classified to unclassified OTUs/ASVs at phylum and genus level were calculated for each sample type and as means across technical replicates ( n = 18). Significant pipeline effects on the relative abundance of the top 15 most abundant genera were identified with a GLM and a Kruskal-Wallis one-way ANOVA using the ALDEx2 package, which takes the compositionality of barcoding data into account.
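For reference, the two alpha diversity indices used here can be computed directly from a vector of OTU/ASV counts. A minimal Python sketch (the study itself used the microeco R package; Simpson is expressed here in the common 1 − D form, which is an assumption about the reported convention):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((n / total) * math.log(n / total) for n in counts if n > 0)

def simpson(counts):
    """Simpson index expressed as 1 - sum(p_i^2); higher values = more diverse."""
    total = sum(counts)
    return 1 - sum((n / total) ** 2 for n in counts)

even = [25, 25, 25, 25]          # maximally even community of four taxa
print(round(shannon(even), 4))   # ln(4) ≈ 1.3863
print(simpson(even))             # 0.75
```

Both indices weight evenness as well as richness, which is why a pipeline can report fewer observed taxa (lower richness) yet higher Shannon and Simpson values, as seen for dada2 in the results.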
Heterogeneity within one pipeline was calculated as follows: first, the mean and standard deviation of the relative abundance of the top 15 most abundant genera among all technical replicates ( n = 18) per sample type was calculated. Then, the proportions of standard deviation to mean relative abundance (= coefficient of variation) were calculated for every genus. The mean of all proportions was considered as an index assessing pipeline heterogeneity among technical replicates and compared between pipelines. A stepwise addition of OTUs/ASVs found in individual technical replicates (replicate no. 1 to no. 18, for each sample type) was calculated and these cumulative OTU/ASV numbers were represented as a line plot. The number of private OTUs/ASVs, i.e. those detected exclusively in individual replicates, was compared among pipelines. Rank abundance curves were calculated to visualize the evenness of OTU/ASV abundances among the different pipelines, using the BiodiversityR package .
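The replicate-heterogeneity index described above (mean coefficient of variation across the top genera) can be sketched in Python as follows (illustrative only; the actual analysis was done in R, the sample standard deviation is assumed, and the function name is ours):

```python
import statistics

def heterogeneity_index(abundance_table):
    """Mean coefficient of variation across genera, as described in the
    text: rows are technical replicates, columns are genera (relative
    abundances). For each genus, CV = sd / mean across replicates; the
    index is the mean of the per-genus CVs."""
    cvs = []
    for genus_abundances in zip(*abundance_table):  # iterate per genus
        mean = statistics.mean(genus_abundances)
        sd = statistics.stdev(genus_abundances)     # sample sd (assumed)
        cvs.append(sd / mean)
    return statistics.mean(cvs)

# Three technical replicates x two genera (relative abundances):
# genus 1 varies between replicates (CV = 0.2), genus 2 is constant (CV = 0).
table = [[0.50, 0.10],
         [0.60, 0.10],
         [0.40, 0.10]]
print(heterogeneity_index(table))
```

A lower index means the pipeline returns more consistent relative abundances across repeated amplifications of the same sample, which is the property being compared between pipelines here.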
Bulk soil (Lithic Leptosol – World Reference Base for Soil Resources) was collected from three pastured grassland sites (approx. 500 m apart) at an elevation of 2500 m a.s.l. The vegetation cover was carefully removed with a shovel and soil was gathered from the upper mineral horizon at 12–20 cm soil depth (Ah horizon). Each bulk soil sample was composed of 10 subsamples (approx. 100 g each), which were combined into a composite soil sample. In total, nine soil samples (= biological replicates) were prepared, transported to the Universität Innsbruck at ambient temperature after a few hours, and processed following and . In brief, 100 mg of soil sieved at 1 mm from each biological replicate was suspended in 10 ml of sterile ¼ Ringer containing 0.01% (v/v) Tween ® 80 solution in a sterile polypropylene tube. The soil solution was shaken on an overhead shaker for 10 min at 90 rpm and treated in an ultrasonic bath for 1 min. The soil slurry was centrifuged at 10,000 x g for 15 min and the supernatant was discarded. DNA was extracted from each of the nine soil and 10 bovine feces biological replicates using the NucleoSpin ® Soil kit (Macherey-Nagel, Germany) to allow direct comparison of microbiota of the different sample types and following the manufacturer’s protocol, with minor modifications, i.e.: (i) homogenization time was doubled, (ii) buffer SL1 was used for the lysis step, and (iii) a volume of 50 µL of enhancer buffer (SX) was added to the sample during lysis. For whole DNA extraction, 70 mg of fecal matter and 30 mg of soil slurry were used as input biomass, respectively. Extraction controls, containing no sample material (lysis buffer only), were included to exclude contaminations in subsequent analyses. Purity and quantity of the DNA extracts were checked by UV/VIS spectrometry using a Spark ® multimode microplate reader (Tecan, Switzerland). 
The ITS2 region, which is recommended for fungal biodiversity studies and is widely used , was selected for amplification. Our primers of choice were ITS4_ILL and gITS7_ILL , of which both are among primers recommended for high-throughput identification of fungi and result in high fungal coverage . To generate technical replicates, one randomly selected biological replicate each of the bovine feces and soil samples was amplified 18 times. For the amplification of the fungal ITS2 region, 9 ng of extracted DNA was mixed with 1x FastStart High Fidelity Reaction Buffer (Roche Applied Science), 1.5 U of FastStart High Fidelity Enzyme Blend (Roche Applied Science) and primers ITS4_ILL / gITS7_ILL to a final concentration of 0.4 µM, resulting in 30 µL final PCR mix per replicate. PCR reactions were performed on a Veriti™ 96-Well Fast Thermal Cycler (Applied Biosystems, USA) using the following conditions: 3 min at 95 °C, followed by 31 cycles of 30 s at 95 °C, 30 s at 50 °C, 30 s at 72 °C, and a single final extension step of 7 min at 72 °C. Non-template controls (amplification controls), containing no DNA but amplification mix only, were included. Quality of the amplicons was checked by performing a high-resolution capillary electrophoresis using the QIAxcel Advanced System (QIAGEN). High-throughput sequencing was performed by the FEM Sequencing and Genotyping Platform (San Michele all’Adige, Italy) on an Illumina MiSeq Standard Flow Cell, using v3 chemistry and 300 bp paired-end reads and a minimum depth of 30,000 reads per sample. For sequence processing, we used two software platforms (DADA2 and mothur) and created the following three ‘pipelines’: DADA2, generating ASVs (hereafter ‘dada2 pipeline’); mothur, generating OTUs with a similarity threshold of 97% (‘mothur_97% pipeline’) or 99% (‘mothur_99% pipeline’), using default commands given by each of the software publishers (Fig. ). 
The same quality filtering and taxonomy assignment settings were adopted in both pipelines to facilitate the comparison of results (see conditions below). Additionally, raw read processing was also conducted with the default settings or recommended standard operating procedures for quality filtering and taxonomy assignment of fungal reads. This allowed us to evaluate the results of each pipeline using the most commonly used approaches in microbial ecology. The applied settings and the results of this additional analysis are provided in the supplement (see Supplementary Information, Material & Methods and Results & Discussion sections). Raw sequencing data were deposited in the NCBI Sequence Read Archive (SRA) and are accessible under the BioProject ID: PRJNA1055419. Details of the samples are provided in the Supplementary Table (Supplementary Material ). In the dada2 pipeline, barcode free, paired-end reads of demultiplexed samples were processed following an ITS-specific adaptation of the 1.8 DADA2 tutorial workflow ( https://benjjneb.github.io/dada2/ITS_workflow.html ), using the DADA2 package in R (v 4.2.0, 23). Primers were removed with Cutadapt and, reads were quality filtered using the filterAndtrim function and the following settings (unified among pipelines): reads less than 100 bp in length, having ambiguous bases or ‘bad quality’ were discarded, where bad quality reads were defined as reads not passing the filterAndtrim -parameter maxEE = c(2,2). The settings maxN, minLen and maxEE were the specified optional arguments in the filterAndtrim function. Reads were not truncated to uniform lengths, to maintain length polymorphisms of the ITS region . ASVs were generated using the DADA2 inference algorithm by the learnErrors and dada functions. Reads were merged ( mergePairs function) and chimeric sequences were removed ( removeBimeraDenovo function). 
Taxonomic classification was assigned ( assignTaxonomy function) using the UNITE (v8.3) database and the RDP Naive Bayesian Classifier algorithm . The bootstrap cutoff for assignment was set to 80% for all pipelines. Fungal OTUs were constructed using mothur (v.1.48.0) following the MiSeq SOP (last access 6/10/22) . Forward and reverse reads were merged using the make.contigs function setting the following parameters (unified among pipelines): maxambig = 0, maxee = 2, deltaq = 0. No read length truncation was performed for the reasons explained above. After primers were trimmed with the trim.seq function, sequences with less than 100 bp in length were discarded. Sequences were pre-clustered allowing a maximum of three base pair differences between reads; chimeric sequences were also removed. Sequences were classified with the classify.seqs function using the UNITE (v8.3) database and applying the RDP Naive Bayesian Classifier algorithm . The bootstrap cutoff for the taxonomy assignment was unified for all pipelines (set to 80%). OTUs were identified after calculating distances between sequences ( pairwise.seqs function) and clustering sequences using OptiClust with an identity level of either 97% (mothur_97% pipeline) or 99% (mothur_99% pipeline). Clustering with different identity levels was the only step in the workflow, where the two mothur pipelines differed from one other. Finally, the consensus taxonomy for each OTU was determined with the classify.otu command. Statistical analyses and graphical outputs were conducted using the microeco and phyloseq packages in R . Contaminant OTUs/ASVs were identified and removed by comparing sample data with that of extraction and amplification controls using the decontam package . Rare OTUs/ASVs were removed based on a relative abundance threshold (pipeline-specific), which was applied sample-wise to account for different sequencing depths among libraries. 
The threshold was defined based on the relative account of singletons and doubletons among the libraries and calculated as follows: first, the mean read count per library was calculated and then, the proportion of 3 reads relative to this mean read count was determined to establish the threshold. This threshold was applied to each library independently, setting all OTUs/ASVs with a relative abundance below this threshold to zero. We calculated the threshold separately for each pipeline due to variations in the mean read count per library among pipelines (threshold for dada2: 0.0236%, threshold for both mothur pipelines: 0.0156%). On average, single libraries contained 12,700 (dada2) and 19,200 reads (both mothur pipelines). Removing OTUs/ASVs with a read count below 3 in one sample corresponds to excluding those with a relative abundance below a pipeline-specific threshold of 0.0236% (dada2) and 0.0156% for both mothur pipelines. After applying these relative abundance thresholds to our OTU/ASV-tables, the minimum read counts in libraries ranged from 2 reads (in libraries exhibiting low sequencing depth) to 9 reads (in libraries exhibiting high sequencing depth). On average, the minimum read count per sample was 3.59 for dada2 and 3.54 for both mothur pipelines. Outputs of the three pipelines (dada2, mothur_97%, mothur_99%) were merged into one R object and compared. Four criteria were used to evaluate pipeline performance: (i) among biological replicates the proportion of the fungal community (as measured by alpha and beta diversity) captured by each pipeline was evaluated; among technical replicates we examined (ii) the proportion of OTUs/ASVs that were classified at each taxonomic level, (iii) the homogeneity of relative abundances of the most abundant genera between replicates and (iv) the capacity of each pipeline to detect OTUs/ASVs across replicates. 
To estimate alpha diversity in biological replicates, the number of observed OTUs/ASVs (richness), as well as Shannon and Simpson indices for bovine feces and soil samples using the microeco package were determined. Alpha diversity was compared between pipelines within one sample type and between sample types within one pipeline using Duncan’s multiple range test for one-way ANOVA; p -values were adjusted using the Benjamini & Hochberg correction . Beta diversity in biological replicates was represented by NMDS-ordination plots based on Bray-Curtis dissimilarities between samples. Significant differences between fungal communities of sample types and sampling sites were identified by applying a PERMANOVA (999 permutations). The betadisper function of the vegan package was used to estimate the multivariate homogeneity of group dispersions (variances) and differences between sample types were examined with an ANOVA. Proportions of classified OTUs/ASVs to unclassified OTUs/ASVs at phylum and genus level were calculated for each sample type and as means across technical replicates ( n = 18). Significant effects on the relative abundance of the top 15 most abundant genera associated with pipelines were identified with a GLM and a Kruskal Wallis one-way ANOVA using the ALDEx2 package , which takes the compositionality of barcoding data into account. Heterogeneity within one pipeline was calculated as follows: first, the mean and standard deviation of the relative abundance of the top 15 most abundant genera among all technical replicates ( n = 18) per sample type was calculated. Then, the proportions of standard deviation to mean relative abundance (= coefficient of variation) were calculated for every genus. The mean of all proportions was considered as an index assessing pipeline heterogeneity among technical replicates and compared between pipelines. A stepwise addition of OTUs/ASVs found in individual technical replicates (replicate no. 1 to no. 
18, for each sample type) was calculated and these cumulative OTU/ASV numbers were represented as a line plot. The number of private OTUs/ASVs, i.e. those detected exclusively in individual replicates, was compared among pipelines. Rank abundance curves were calculated to visualize the evenness of OTU/ASV abundances among the different pipelines, using the BiodiversityR package .

Differences in fungal communities estimated by the three pipelines

The alpha and beta diversity of the fungal communities identified by the three pipelines were compared in a set of biological replicates to assess the similarity of pipeline outcomes. For both sample types, the dada2 pipeline identified a significantly lower absolute number of ASVs (richness) and had significantly higher estimates of both Shannon and Simpson indices compared to the mothur_97% and mothur_99% pipelines (Fig. ). The mothur_97% pipeline generated significantly lower alpha diversity measures than mothur_99% in bovine fecal samples. In soil samples, the alpha diversity measures followed the same trend, although differences between pipelines were not significant. All pipelines consistently identified higher numbers of observed OTUs/ASVs (only mothur_99% significantly so) and higher Shannon and Simpson indices (all three pipelines) in bovine feces samples compared with soil (Fig. ). Notably, the alpha diversity outcomes were not consistent between pipelines when choosing pipeline-specific default settings (quality thresholds and bootstrapping cut-off) during sequence processing; e.g. Shannon measures were significantly higher in soil compared to bovine feces with dada2, while the opposite was the case with mothur_99% (see Supplementary Information, Supporting Results & Discussion section, Figure ).
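For reference, the three alpha diversity measures compared here can be computed directly from a vector of read counts. A minimal plain-Python sketch (the study itself used the microeco R package; note that some packages report Simpson as D rather than the Gini-Simpson form 1 − D used below):

```python
from math import log

def alpha_diversity(counts):
    """Richness, Shannon index and Gini-Simpson index for one library."""
    counts = [c for c in counts if c > 0]
    total = sum(counts)
    props = [c / total for c in counts]
    richness = len(counts)                      # observed OTUs/ASVs
    shannon = -sum(p * log(p) for p in props)   # H'
    simpson = 1 - sum(p * p for p in props)     # 1 - D
    return richness, shannon, simpson

# A perfectly even community of four taxa:
# alpha_diversity([10, 10, 10, 10]) -> (4, log(4) ≈ 1.386, 0.75)
```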
The fungal community compositions of bovine feces and soil samples were significantly different (PERMANOVA, p < 0.001), and consistently, no differences between sampling sites were found for each sample type for each pipeline (PERMANOVA, p > 0.05; Figure A). However, beta diversity varied significantly with pipeline (PERMANOVA, p < 0.001). NMDS-plots based on Bray-Curtis dissimilarities showed that the sample clustering was similar for both mothur pipelines, whereas the sample distribution using dada2 was different. The analysis of multivariate homogeneity of group variances revealed that all pipelines consistently identified significant differences between bovine feces and soil samples. In both sample types, dada2 had the highest distances between samples and group centres (centroids) and mothur_97% the lowest (Figure B). Sequencing yielded a total of 479,936 reads in bovine feces and 506,300 reads in soil technical replicates. After processing through the three pipelines, the basic read outputs were compared (Table ). Homogeneity of sequencing depths was checked via rarefaction curve calculation (Figure ). After processing, both mothur pipelines retained a higher total number of reads per sample type and mean per replicate, whereas dada2 retained a lower number of total and mean reads (Table ). Likewise, the number of observed OTUs/ASVs (total per sample type and mean per replicate) was higher for mothur pipelines (highest with mothur_99%) than dada2. The removal of rare OTUs/ASVs also impacted the pipelines differently, causing a ~ 5% loss of OTUs/ASVs with dada2, ~ 57% loss with mothur_97% and ~ 79% loss with mothur_99%.
Due to this step, the differences in observed OTUs/ASVs between pipelines were reduced: on average, dada2 retained 681 ASVs for both sample types, mothur_97% retained 641 OTUs and mothur_99% retained 817 OTUs (Table ). Dada2 showed the highest proportion of classified phyla and genera in individual replicates, followed by mothur_99% and mothur_97% (Table ). The discrepancy between the sum and mean number of OTUs/ASVs obtained in the 18 replicates in each sample type differed across pipelines; for example, with dada2, the sum of all OTUs/ASVs was about five times higher than the mean number of OTUs/ASVs per individual replicate, whereas it was only three times higher for both mothur pipelines (Table ). Of note, an additional analysis with dada2 revealed that pooling all technical replicates before the ASV construction resulted in 20% fewer observed ASVs (in total) than pooling all replicates after ASV inference. The identification of genera that were either commonly or exclusively detected by each pipeline (Fig. ) was performed after removing all OTUs/ASVs not classified at least to genus level. In bovine feces replicates, 101 genera (accounting for 98.9% of all reads) were detected by all pipelines, whereas 2–4 genera were uniquely identified by only one pipeline (Fig. A). All of these uniquely identified genera had low read counts (max. 20 reads per genus; Table ). Generally, the overlap of identified genera between mothur_97% and mothur_99% was higher than the overlap between dada2 and the mothur pipelines. In soil replicates, 75 genera (accounting for 94.3% of all reads) were identified by all pipelines and 1–3 genera were uniquely identified by only one pipeline (Fig. B). These unique genera generally exhibited low read counts (maximum 12 reads), with two exceptions showing high read counts (dada2: Calycina 6114 reads; mothur_99%: Preussia 647 reads; Table ).
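The shared and pipeline-specific genera can be obtained with plain set operations on the per-pipeline genus lists. A toy example loosely echoing the soil results (genus sets truncated and partly invented for illustration):

```python
# Hypothetical classified-genus sets per pipeline (illustration only).
dada2 = {"Fusarium", "Mucor", "Calycina"}
mothur_97 = {"Fusarium", "Mucor"}
mothur_99 = {"Fusarium", "Mucor", "Preussia"}

shared_by_all = dada2 & mothur_97 & mothur_99      # detected by all pipelines
unique_to_dada2 = dada2 - (mothur_97 | mothur_99)  # e.g. Calycina in soil
unique_to_m99 = mothur_99 - (dada2 | mothur_97)    # e.g. Preussia in soil
```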
Homogeneity of relative abundances among pipelines and OTU/ASV detection

Structural community composition was compared using the 15 most abundant genera among all pipelines in technical replicates (bovine feces and soil, respectively; Fig. ). Pipelines provided contrasting results regarding relative abundances of these selected genera for both sample types. In bovine feces, 13 out of 15 genera had significantly different relative abundances ( p < 0.001) among the tested pipelines. In soil samples, 9 out of 15 genera showed significantly different abundances ( p < 0.05) depending on the pipeline, e.g. dada2 failed to identify the genus Trichocladium , while the mothur pipelines missed the genus Calycina , both genera with high abundance according to the other software (Fig. ). While the genus Calycina was at least represented at a higher taxonomic level in mothur (e.g. family Hyaloscyphaceae, consistently found to be highly abundant by all pipelines), the genus Trichocladium was underrepresented by dada2 even at higher taxonomic levels (e.g. family Chaetomiaceae: 67 reads, sum of all samples, abundance estimated by dada2). Importantly, dada2 showed a significantly higher heterogeneity between the technical replicates (ANOVA, p < 0.005) than mothur_99% and mothur_97% (Fig. ). A stepwise addition of OTUs/ASVs in technical replicates showed that dada2 and mothur_97% had a similar rate of increase of OTUs/ASVs in bovine feces samples, while mothur_99% showed a steeper increase. However, in soil samples, OTU/ASV numbers of both mothur pipelines (especially mothur_97%) plateaued, while the numbers found with dada2 showed a continuous steep increase (Fig. ). Importantly, we found that by reducing the sample size from 18 to three technical replicates (a number of replicates more commonly used in microbiota studies), dada2 only detected about 31% of all OTUs/ASVs (i.e.
sum of 18 technical replicates for bovine feces and soil samples, respectively) compared with mothur_97% and mothur_99%, which detected about 43% of all OTUs/ASVs (sum of 18 technical replicates, mean of both mothur pipelines) in bovine feces samples, and almost 50% of all OTUs/ASVs (sum of 18 technical replicates, mean of both mothur pipelines) in soil samples (Fig. A, B). Of note, after processing our data with non-unified and specifically recommended quality thresholds for each pipeline (see Supplementary Information, Material & Methods section), stepwise addition of OTUs/ASVs in technical replicates showed that the number of OTUs/ASVs plateaued for mothur pipelines at a sample size of ± 10 samples, whereas dada2 showed an almost linear increase of OTU/ASV numbers (see Supplementary Information, Results & Discussion section, Figure ); at a sample size of three, dada2 detected 16.7% of all OTUs/ASVs, whereas mothur_97% and mothur_99% identified up to 66.7% of all OTUs/ASVs. Rank abundance curves of the top 50 most abundant OTUs/ASVs demonstrated that the evenness among the highly abundant OTUs/ASVs was higher with dada2 (moderate decrease of the line) than with both mothur pipelines (steep decrease of the line), and this was more evident for soil (Supplementary Information, Results & Discussion section, Figure A, B). Metabarcoding is now an indispensable tool for community studies, but the bioinformatic analysis of the resulting large datasets is challenging; therefore, having practical guidelines available for choosing the appropriate pipeline(s) can save valuable time. In this study, we compared the OTU-clustering approach in mothur, using two identity thresholds, with the ASV-inferring method in DADA2 with metabarcoding data from the ITS2 region in fungal communities from bovine feces and soil samples. In detail, we compared the fungal output of three pipelines, which we named: dada2, mothur_97% and mothur_99%. They have been developed and commonly used for analyzing prokaryote (especially bacterial) amplicon sequences: while DADA2 software has been increasingly applied to fungal communities, mothur (either with 97 or 99% identity) has received relatively less attention. Our aim was to evaluate if and how these bioinformatic strategies impact fungal community diversity and composition in environmental samples.
We processed 19 biological replicates (10 bovine feces and 9 soil replicates) and 36 technical replicates ( n = 18 replicates of one bovine feces and one soil sample) with the three different pipelines, and used four criteria to evaluate pipeline performance (specified in Materials and methods section). In general, although outputs of the three pipelines were significantly different for some indices, the conclusions (e.g. differences of alpha diversity between sample types) were consistent. However, processing sequencing data with pipeline-specific default quality thresholds and bootstrapping cut-offs, which, to the best of our knowledge, is the most frequent analytical strategy, leads to opposing conclusions (see Supplementary Information, Material & Methods section; Figure ).

Proportion of the fungal community captured by the pipeline (alpha and beta diversity)

Some studies have suggested that ecological patterns of microbiota are quite robust regardless of the bioinformatic pipeline used to analyse amplicon datasets . This conclusion was confirmed in our study. We showed that the pipelines used here consistently identified significantly higher alpha diversity estimates in bovine feces compared with soil samples (albeit not significantly for dada2 and mothur_97% richness estimates). However, we also showed that the consistency among pipelines was due to the adoption of the same settings (quality thresholds and bootstrapping cut-off), instead of default settings, in each pipeline during sequence processing. By applying the same settings, within each sample type, the alpha diversity estimates differed substantially between the pipelines: dada2 exhibited the lowest species richness followed by mothur_97% and mothur_99%.
Previous studies have also found such discrepancies between different sets of pipelines and suggested that absolute estimates of species richness should not be overinterpreted and that metabarcoding studies should focus on the differences between samples . Overall, our comparison revealed homogeneity among pipelines regarding diversity conclusions. However, we showed that this homogeneity is disrupted by adopting default and/or recommended pipeline settings. This highlights the importance of carefully specifying settings for pipelines, as such adaptations might have a great impact on the outcomes of fungal community analysis (e.g. species richness, OTU/ASV detection). Despite the similar conclusions drawn by the pipelines, their alpha diversity measures differed significantly. One of the most important differences between pipelines is the clustering or data-filtering approach generating OTUs or ASVs, already well-recognized as estimating different, pipeline-specific results for bacterial communities . However, the discussion of whether OTU clustering or ASV inference is better suited for fungal ITS metabarcoding data is ongoing . Although ASV approaches were shown to recover mock communities of several fungal strains better than OTU clustering approaches , they might overestimate fungal diversity when using markers with a high level of intraspecific variation (e.g. the ITS2 subregion). Due to the high sensitivity of ASV approaches, allelic variants of the ITS region will be assigned to different ASVs and inflate the fungal diversity . On the other hand, ASV approaches likely underestimate the richness of less prevalent fungal species, due to the removal of less abundant ASVs during ASV construction . In contrast, comparisons of mothur’s OTU clustering with ASV approaches for 16S rRNA amplicon data analysis have suggested that mothur tends to overestimate richness .
Our findings indicate significantly lower fungal richness in bovine feces and soil samples processed with the ASV approach (dada2) compared with the OTU clustering methods (mothur_97%, mothur_99%). On the contrary, higher Shannon and Simpson indices were estimated for both sample types with dada2 compared with mothur pipelines. This confirms an overestimation of abundant ASVs with the ASV approach. In addition, rank abundance curves showed that dada2 identified a higher number of highly abundant ASVs than both mothur pipelines (OTUs), further confirming an overestimation of abundant ASVs (Figure ). In addition to the clustering or data-filtering approaches of the three pipelines, the mothur pipelines were applied here using two identity thresholds for OTU clustering, which also affected the alpha diversity measures. Fungal sequences are commonly clustered to OTUs based on 97% identity ; however, higher thresholds have been proposed as more appropriate for fungal data . Here, we applied both 97% and 99% similarity thresholds and found significant differences across the corresponding fungal community in bovine feces (but not soil). In fact, mothur_97% resulted in lower numbers of observed OTUs (richness), as well as lower estimates for both Shannon and Simpson indices than mothur_99% among biological replicates. Lower diversity observed with mothur_97% may result from the collapsing of erroneous sequences or intragenomic variations into other / fewer OTUs, as well as from aggregating distinct species due to the 97% similarity threshold. On the other hand, a higher similarity threshold (e.g. 99%) could have retained more ‘true’ species , but also OTUs that originate from intragenomic variation . Due to intragenomic variations, multiple copies of the ITS region can occur within one species . This heterogeneity complicates species identification with HTS approaches and might lead to an overestimation of species richness. 
This overestimation could be more severe when applying higher similarity thresholds, as more intragenomic variation will be incorrectly clustered into distinct OTUs ; for example, in our environmental samples, clustering with 99% similarity (mothur_99%) resulted in more than twice as many OTUs as with 97% similarity (mothur_97%) (1499 and 4293 OTUs in bovine feces, 1512 and 3317 OTUs in soil samples using 97% and 99% similarity thresholds; including rare OTUs). Likewise, other studies found more fungal OTUs in environmental samples when applying similarity thresholds higher than 97% . We assume that the incorrect allocation of intragenomic variations to distinct OTUs occurred more frequently when using the higher similarity threshold (99%) in our analysis, and this could, combined with sequencing errors, lead to erroneously high richness results . However, with the present sequencing data and available tools, we cannot confirm or correct these errors. Although the ITS region is appropriate for species identification across a broad range of fungi, it is clearly not appropriate for all fungi due to their differing rates of evolution . Increasing the taxonomic coverage of reference databases, performing large-scale species identifications and adapting existing bioinformatic pipelines might be solutions for dealing with intragenomic variations in future studies . In addition to intragenomic variation, PCR and sequencing errors possibly also result in the generation of rare (and false) OTUs , and removing these invalid OTUs is recommended . Although we minimized PCR errors by using a High Fidelity Polymerase with 3’-5’ proofreading activity, we removed rare OTUs/ASVs (using a relative abundance threshold, applied sample-wise) from our dataset to avoid overestimation of alpha diversity. We found that many OTUs (748 and 3274 OTUs in bovine feces, 981 and 2701 OTUs in soil samples using 97% and 99% similarity thresholds, respectively) were discarded by filtering these rare OTUs.
With mothur_99%, filtering removed about 79% of all OTUs, which is a much higher percentage than for mothur_97% (57% of OTUs removed). This is probably due to the lower number of rare OTUs identified with mothur_97%, since applying lower similarity thresholds (here: 97% compared with 99% similarity) results in the merging of rare OTUs with other low-abundance or abundant OTUs . Overall, this filtering step converged the total numbers of OTUs/ASVs observed with different similarity thresholds (mothur_97% and mothur_99%) and clustering methods (dada2) (Table , total number of OTUs/ASVs in Post-processing section compared with Following sample-wise removal of rare OTUs/ASVs section). These results are in line with the findings of , where similar richness estimates among different pipelines were achieved by filtering of rare OTUs.

Proportion of OTUs/ASVs that were classified to genus level

Although the same classifier (RDP Naive Bayesian Classifier algorithm) and the same minimum bootstrap confidence value (80% cutoff) were used for the taxonomic assignment in all pipelines, the ratio of identified phyla and genera in single technical replicates differed among the pipelines. Dada2 classified a higher proportion of OTUs/ASVs to phylum and genus level in single replicates compared with the mothur pipelines; however, considering absolute numbers, dada2 identified a lower number of phyla and genera than both mothur pipelines, which is in line with the richness results of OTUs/ASVs among pipelines. Data processing with the pipelines mothur_97% and mothur_99% identified similar numbers of genera. In both sample types, about half of the identified genera (50% in bovine feces, 60% in soil) were detected by all pipelines; however, those shared genera account for > 94% of all sequences found in bovine feces and soil samples, respectively.
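The classified-to-unclassified ratio reported above reduces to a simple proportion. A hypothetical sketch in which `None` marks an OTU/ASV left unclassified at the rank of interest:

```python
def classified_fraction(assignments):
    """Fraction of OTUs/ASVs with a taxonomic call at a given rank."""
    classified = sum(1 for taxon in assignments.values() if taxon is not None)
    return classified / len(assignments)

# Illustrative genus-level calls for four OTUs (invented names):
genus_calls = {"OTU1": "Fusarium", "OTU2": None, "OTU3": "Mucor", "OTU4": "Preussia"}
classified_fraction(genus_calls)  # -> 0.75
```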
We conclude that, despite differences in their relative abundance estimates (see below), abundant genera were detected by all pipelines, and mainly rare genera, exhibiting only low read counts, were assigned to one specific pipeline. In fact, every pipeline identified a few unique genera with low read counts (2–20 reads). These could have possibly emerged from sequencing errors, and a more stringent filtering of rare OTUs/ASVs would have eliminated most of these unique genera. Generally, taxonomic assignments at such high taxonomic resolution should not be overvalued, as taxonomic identification of OTUs/ASVs might be inadequate . Nevertheless, dada2 and mothur_99% each identified one abundant genus (dada2: Calycina 6114 reads; mothur_99%: Preussia 647 reads) in soil, both of which are saprotrophs (or saprotroph-symbiotrophs) that decay dung or wood . Considering the trophic modes, we find it plausible that these genera were present in our soil samples, making it concerning that the other pipelines missed them. However, lowering the taxonomic resolution to the family level revealed that all three pipelines identified the families to which these genera belong, but did not classify the associated OTUs further to the genus level. For future comparative studies, lowering the bootstrap threshold in taxonomic classification (at the cost of reliability of the given results) could be considered to retain more fungal genera.

Homogeneity of relative abundances and OTU/ASV detection in technical replicates

Consistent with the discussion above, in our comparison of community compositions among pipelines we focused here on the top 15 most abundant fungal genera in both bovine feces and soil technical replicates. We found that the majority of abundant genera exhibited significantly inconsistent proportions among different pipelines. Of particular note is that some genera (e.g.
Calycina , Delitschia ) were found to be highly abundant in bovine feces or soil replicates by one pipeline (dada2 and mothur_97%, respectively), but were not even identified in most replicates by the other (mothur_97% and dada2, respectively). We also found that the homogeneity of the relative abundances of the most abundant genera among the replicates ( n = 18) differed according to pipeline. Overall, dada2 exhibited a significantly higher heterogeneity in bovine feces and soil replicates than the mothur pipelines (Fig. ; mean Stdv), and mothur_97% showed the least heterogeneity. As our samples consisted of technical replicates ( n = 18) of one sample per environmental sample type (bovine feces and soil), and the fungal community composition should, in theory, be identical, we conclude that the pipeline with the highest homogeneity among all technical replicates (mothur_97%) would be best suited to describe a fungal community and also – in comparable studies – to identify differences between different samples, due to a lower internal variability of technical replicates. We also explored the variability in the number of private OTUs/ASVs (i.e. those found exclusively in individual replicates) among different pipelines. Results indicated that pipelines varied in their capacity to detect OTUs/ASVs in different sample types and depending on sample number. While both mothur pipelines (mothur_97%, mothur_99%) detected 31.8% of all possible OTUs for bovine feces or soil in a single replicate, dada2 only detected 18.5% of all ASVs per replicate (see Fig. A, B). This means that if the number of replicates per sample type were lowered to a number typical of a field experiment with many sites (e.g. three replicates), only 32% of all ASVs (relative to 18 replicates) would be identified with dada2, whereas mothur_97% and mothur_99% would identify 46.2% of all OTUs (see Fig. ). This discrepancy is attributed to the distinct patterns observed in cumulative taxonomic numbers.
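The stepwise addition and private-OTU counts discussed above can be sketched as set arithmetic over replicate-wise OTU/ASV lists (illustrative identifiers, not the study's data):

```python
from collections import Counter

def accumulation_curve(replicates):
    """Cumulative number of distinct OTUs/ASVs as replicates are added."""
    seen, curve = set(), []
    for rep in replicates:
        seen |= rep
        curve.append(len(seen))
    return curve

def private_otus(replicates):
    """OTUs/ASVs detected in exactly one replicate."""
    freq = Counter(otu for rep in replicates for otu in rep)
    return {otu for otu, n in freq.items() if n == 1}

reps = [{"otu1", "otu2"}, {"otu2", "otu3"}, {"otu2", "otu4"}]
accumulation_curve(reps)  # -> [2, 3, 4]
private_otus(reps)        # -> {'otu1', 'otu3', 'otu4'}
```

A pipeline whose curve keeps climbing steeply (as observed for dada2) detects many private OTUs/ASVs and therefore needs more replicates to approach the full inventory.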
The mothur pipelines, particularly mothur_97% for soil replicates, demonstrated a more efficient OTU/ASV detection with fewer replicates, which is advantageous. Notably, applying pipeline-specific recommendations for quality filtering and bootstrap cut-offs during sequence processing led to a plateau in OTU detection during stepwise addition of OTUs in case of the mothur pipelines, demonstrating sufficient detection even with fewer replicates. In contrast, dada2 showed an almost linear increase in ASVs, with a high number of private ASVs. The analysis of private OTUs/ASVs indicates that the mothur pipelines exhibit better OTU/ASV detection than dada2, and that mothur’s default settings during sequence processing result in sufficient OTU/ASV detection among technical replicates. Some studies have suggested that ecological patterns of microbiota are quite robust regardless of the bioinformatic pipeline used to analyse amplicon datasets . This conclusion was confirmed in our study. We showed that the pipelines used here consistently identified significantly higher alpha diversity estimates in bovine feces compared with soil samples (albeit not significantly for dada2 and mothur_97% richness estimates). However, we also showed that the consistency among pipelines was due to the adoption of the same settings (quality thresholds and bootstrapping cut-off), instead of default settings, in each pipeline during sequence processing. By applying the same settings, within each sample type, the alpha diversity estimates differed substantially between the pipelines: dada2 exhibited the lowest species richness followed by mothur_97% and mothur_99%. Previous studies have also found such discrepancies between different sets of pipelines and suggested that absolute estimates of species richness should not be overinterpreted and that metabarcoding studies should focus on the differences between samples . 
Overall, our comparison revealed homogeneity among pipelines regarding diversity conclusions. However, we showed that this homogeneity is disrupted by adopting default and/or recommended pipeline settings. This highlights the importance of carefully specifying settings for pipelines. Adaptations might have a great impact on the outcomes regarding fungal community analysis (e.g. species richness, OTU/ASV-detection). Despite the similar conclusions drawn by the pipelines, their alpha diversity measures differed significantly. One of the most important differences between pipelines is the clustering or data-filtering approach generating OTUs or ASVs, already well-recognized as estimating different, pipeline-specific results for bacterial communities . However, the discussion whether OTU clustering or ASV inference is better suited for fungal ITS metabarcoding data is ongoing . Although ASV approaches were shown to recover mock communities of several fungal strains better than OTU clustering approaches , they might overestimate fungal diversity when using markers with a high level of intraspecific variation (e.g. ITS2 subregion). Due to the high sensitivity of ASV approaches, allelic variants of the ITS region will be assigned to different ASVs and inflate the fungal diversity . On the other hand, ASV approaches likely underestimate the richness of less prevalent fungal species, due to removal of less abundant ASVs during ASV construction . In contrast, comparing mothur’s OTU clustering with ASV approaches for 16S rRNA amplicon data analysis have suggested that mothur tends to overestimates richness . Our findings indicate significantly lower fungal richness in bovine feces and soil samples processed with the ASV approach (dada2) compared with the OTU clustering methods (mothur_97%, mothur_99%). On the contrary, higher Shannon and Simpson indices were estimated for both sample types with dada2 compared with mothur pipelines. 
This points to an overestimation of abundant ASVs with the ASV approach, and rank abundance curves support it: dada2 identified a higher number of highly abundant ASVs than both mothur pipelines (OTUs) (Figure ). In addition to the clustering or data-filtering approaches of the three pipelines, the mothur pipelines were applied here using two identity thresholds for OTU clustering, which also affected the alpha diversity measures. Fungal sequences are commonly clustered to OTUs based on 97% identity ; however, higher thresholds have been proposed as more appropriate for fungal data . Here, we applied both 97% and 99% similarity thresholds and found significant differences in the resulting fungal community estimates in bovine feces (but not soil). In fact, mothur_97% resulted in lower numbers of observed OTUs (richness), as well as lower estimates for both Shannon and Simpson indices, than mothur_99% among biological replicates. The lower diversity observed with mothur_97% may result from the collapsing of erroneous sequences or intragenomic variations into fewer OTUs, as well as from aggregating distinct species due to the 97% similarity threshold. On the other hand, a higher similarity threshold (e.g. 99%) could have retained more ‘true’ species , but also OTUs that originate from intragenomic variation . Due to intragenomic variations, multiple copies of the ITS region can occur within one species . This heterogeneity complicates species identification with HTS approaches and might lead to an overestimation of species richness.
This overestimation could be more severe when applying higher similarity thresholds, as more intragenomic variation will be incorrectly clustered into distinct OTUs ; for example, in our environmental samples, clustering at 99% similarity (mothur_99%) resulted in more than twice as many OTUs as at 97% similarity (mothur_97%) (1499 vs. 4293 OTUs in bovine feces and 1512 vs. 3317 OTUs in soil samples at the 97% and 99% similarity thresholds, respectively; including rare OTUs). Likewise, other studies found more fungal OTUs in environmental samples when applying similarity thresholds higher than 97% . We assume that the incorrect allocation of intragenomic variants to distinct OTUs was higher when using the higher similarity threshold (99%) in our analysis, and this could, combined with sequencing errors, lead to erroneously high richness results . However, with the present sequencing data and available tools, we cannot confirm or correct these errors. Although the ITS region is suitable for species identification across a broad range of fungi, it is clearly not suitable for all fungi, owing to their differing rates of evolution . Increasing the taxonomic coverage of reference databases, performing large-scale species identifications and adapting existing bioinformatic pipelines might be solutions for dealing with intragenomic variations in future studies . In addition to intragenomic variation, PCR and sequencing errors possibly also result in the generation of rare (and false) OTUs , and removing these invalid OTUs is recommended . Although we minimized PCR errors by using a High Fidelity Polymerase with 3’-5’ proofreading activity, we removed rare OTUs/ASVs (using a relative abundance threshold, applied sample-wise) from our dataset to avoid overestimation of alpha diversity. We found that many OTUs (748 and 3274 OTUs in bovine feces, 981 and 2701 OTUs in soil samples using the 97% and 99% similarity thresholds, respectively) were discarded by filtering these rare OTUs.
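The sample-wise removal of rare OTUs/ASVs described above can be sketched as a per-sample relative-abundance filter. This is an illustrative sketch only: the 0.1% default threshold below is an assumption, not the study's actual cut-off.

```python
def filter_rare(sample_counts, min_rel_abundance=1e-3):
    """Drop OTUs/ASVs whose within-sample relative abundance falls below
    the threshold. The 0.1% default is illustrative; the study's actual
    cut-off is an assumption, not taken from the text."""
    total = sum(sample_counts.values())
    if total == 0:
        return {}
    return {otu: c for otu, c in sample_counts.items()
            if c / total >= min_rel_abundance}

sample = {"OTU_1": 9000, "OTU_2": 999, "OTU_3": 1}
print(filter_rare(sample))  # OTU_3 (0.01% of reads) is removed
```

Applying the filter per sample, rather than to the pooled dataset, mirrors the "applied sample-wise" procedure: an OTU rare in one sample can still be retained in another where it is abundant.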
With mothur_99%, filtering removed about 79% of all OTUs, a much higher percentage than for mothur_97% (57% of OTUs removed). This is probably due to the lower number of rare OTUs identified with mothur_97%, since applying lower similarity thresholds (here: 97% compared with 99% similarity) results in the merging of rare OTUs with other low-abundance or abundant OTUs . Overall, this filtering step converged the total numbers of OTUs/ASVs observed with different similarity thresholds (mothur_97% and mothur_99%) and clustering methods (dada2) (Table , total number of OTUs/ASVs in the Post-processing section compared with the Following sample-wise removal of rare OTUs/ASVs section). These results are in line with previous findings , where similar richness estimates among different pipelines were achieved by filtering rare OTUs. Although the same classifier (RDP Naive Bayesian Classifier algorithm) and the same minimum bootstrap confidence value (80% cutoff) were used for the taxonomic assignment in all pipelines, the proportions of identified phyla and genera in single technical replicates differed among the pipelines. Dada2 classified a higher proportion of OTUs/ASVs to phylum and genus level in single replicates compared with the mothur pipelines; however, in absolute numbers, dada2 identified fewer phyla and genera than both mothur pipelines, which is in line with the OTU/ASV richness results among pipelines. Data processing with the pipelines mothur_97% and mothur_99% identified similar numbers of genera. In both sample types, about half of the identified genera (50% in bovine feces, 60% in soil) were detected by all pipelines; however, those shared genera account for >94% of all sequences found in bovine feces and soil samples, respectively.
We conclude that, despite differences in their relative abundance estimates (see below), abundant genera were detected by all pipelines, and that mainly rare genera with low read counts were unique to a single pipeline. In fact, every pipeline identified a few unique genera with low read counts (2–20 reads). These could have emerged from sequencing errors, and more stringent filtering of rare OTUs/ASVs would have eliminated most of these unique genera. Generally, taxonomic assignments at such high taxonomic resolution should not be overvalued, as taxonomic identification of OTUs/ASVs might be inadequate . Nevertheless, dada2 and mothur_99% each identified one abundant genus in soil (dada2: Calycina, 6114 reads; mothur_99%: Preussia, 647 reads), both of which are saprotrophs (or saprotroph-symbiotrophs) that decay dung or wood . Considering these trophic modes, we find it plausible that both genera were present in our soil samples, making it concerning that the other pipelines missed them. However, lowering the taxonomic resolution to the family level revealed that all three pipelines identified the families to which these genera belong but did not classify the associated OTUs further to the genus level. For future comparative studies, lowering the bootstrap threshold in taxonomic classification (at the cost of reliability) could be considered to retain more fungal genera. Consistent with the discussion above, our comparison of community compositions among pipelines focused on the top 15 most abundant fungal genera in both bovine feces and soil technical replicates. We found that the majority of abundant genera exhibited significantly inconsistent proportions among pipelines. Of particular note is that some genera (e.g.
Calycina , Delitschia ) were found to be highly abundant in bovine feces or soil replicates by one pipeline (dada2 or mothur_97%) but were not even identified in most replicates by another pipeline (mothur_97% or dada2). We also found that the homogeneity of the relative abundances of the most abundant genera among the replicates ( n = 18) differed by pipeline. Overall, dada2 exhibited significantly higher heterogeneity in bovine feces and soil replicates than the mothur pipelines (Fig. ; mean Stdv), whereas mothur_97% showed the least heterogeneity. As our samples consisted of technical replicates ( n = 18) of one sample per environmental sample type (bovine feces and soil), the fungal community composition should theoretically be identical; we therefore conclude that the pipeline with the highest homogeneity among technical replicates (mothur_97%) is best suited to describe a fungal community and also, in comparable studies, to identify differences between samples, owing to the lower internal variability of technical replicates. We also explored the variability in the number of private OTUs/ASVs (i.e. those found exclusively in individual replicates) among pipelines. The pipelines varied in their capacity to detect OTUs/ASVs depending on sample type and number of replicates. While both mothur pipelines (mothur_97%, mothur_99%) detected 31.8% of all possible OTUs for bovine feces or soil in a single replicate, dada2 detected only 18.5% of all ASVs per replicate (see Fig. A, B). This means that if the number of replicates per sample type were lowered to a number typical of a field experiment with many sites (e.g. three replicates), only 32% of all ASVs (out of 18 replicates) would be identified with dada2, whereas mothur_97% and mothur_99% would identify 46.2% of all OTUs (see Fig. ). This discrepancy is attributable to the distinct patterns observed in the cumulative taxonomic numbers.
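The private-OTU/ASV counts and stepwise-addition (accumulation) curves discussed above reduce to simple set operations on replicate-wise presence data. A hypothetical sketch, not the study's own code:

```python
from collections import Counter

def accumulation_curve(replicates):
    """Cumulative number of distinct OTUs/ASVs as replicates are added stepwise."""
    seen, curve = set(), []
    for rep in replicates:
        seen |= set(rep)
        curve.append(len(seen))
    return curve

def private_taxa(replicates):
    """OTUs/ASVs detected in exactly one replicate ('private' taxa)."""
    tally = Counter(taxon for rep in replicates for taxon in set(rep))
    return {taxon for taxon, n in tally.items() if n == 1}

reps = [{"a", "b"}, {"b", "c"}, {"c", "d"}]
print(accumulation_curve(reps))  # [2, 3, 4]
print(private_taxa(reps))        # {'a', 'd'}
```

A pipeline whose accumulation curve plateaus early (as for the mothur pipelines) captures most taxa with few replicates, whereas a near-linear curve with many private taxa (as for dada2) indicates that each added replicate still contributes many new OTUs/ASVs.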
The mothur pipelines, particularly mothur_97% for soil replicates, demonstrated more efficient OTU/ASV detection with fewer replicates, which is advantageous. Notably, applying pipeline-specific recommendations for quality filtering and bootstrap cut-offs during sequence processing led to a plateau in OTU detection during the stepwise addition of replicates for the mothur pipelines, demonstrating sufficient detection even with fewer replicates. In contrast, dada2 showed an almost linear increase in ASVs, with a high number of private ASVs. The analysis of private OTUs/ASVs indicates that the mothur pipelines achieve better OTU/ASV detection than dada2 and that mothur’s default settings during sequence processing yield sufficient OTU/ASV detection among technical replicates. Overall, our study highlights the impact of bioinformatic pipeline selection on fungal metabarcoding data. The comparison revealed significant differences in the results obtained from commonly used pipelines, particularly when pipeline-specific default or recommended settings are used. We found that species richness in biological replicates was significantly higher with the mothur pipelines (highest with mothur_99%) than with dada2. The dada2 pipeline (ASV approach) showed the greatest heterogeneity of relative abundances and poorer OTU/ASV detection across technical replicates ( n = 18) compared with the mothur pipelines. In summary, we (i) draw attention to the great impact of pipeline settings on sufficient OTU/ASV detection and (ii) point out that the OTU approach outcompeted the ASV approach, owing to more efficient OTU detection and greater homogeneity among technical replicates. Hence, we recommend using a pipeline with OTU clustering (e.g. mothur_97%), along with careful reflection on the respective pipeline settings, for future studies.
Women’s Health Information-Seeking Experiences and Preferences for Health Communications on FDA-Regulated Products: A Qualitative Study in Urban Area

1. Introduction

In general, women in the United States shoulder much of the responsibility for the health care of themselves and their families . In 2018, 89.3% of American women reported seeking services or advice from a health care facility . About 80.1% of American women aged 55 years or older have one or more chronic conditions . Despite women’s health care needs and their high utilization of health care services, women are also found to delay or forgo health care . Caregiving is an added burden on women. The 2017 Health Information National Trends Survey (HINTS) revealed that 64% of women look for health information to support someone . The unmet health needs and potential caregiving responsibilities among women underscore the importance of having ready access to health information to support the health-related decisions they make for themselves and for persons in their care. Accessing reliable health information, however, can be a major challenge for anyone in the current digital age . Despite the importance of understanding the health information needs of women in older generations, limited research to date addresses their health information-seeking motives, perceptions, challenges, and preferences regarding FDA-regulated products. Understanding these health information-seeking elements is key to FDA’s mission of improving communication strategies and materials in order to help the public, including women, make better-informed health decisions.
This study aimed to (a) identify motives, perceptions, challenges, and preferences among women in older generations for health information sources and materials related to FDA-regulated products, including drugs, vaccines, and medical devices; and (b) explore their preferred information sources and materials, including variations by generation and caregiving status.

2. Materials and Methods

2.1. Study Design

In August and September 2018, we followed a modified grounded theory approach to conduct in-person focus groups among women from the Baltimore–Washington metropolitan area to elicit their motivations, challenges, and preferences toward health information sources and materials associated with FDA-regulated products.

2.2. Participants

The study participants included women from three generations: Generation A (born 1965 to 1980), Generation B (born 1946 to 1964), and Generation C (born 1928 to 1945). Study eligibility criteria were (a) self-identified women and (b) women between 38 and 90 years old in 2018.

2.3. Materials and Methods

Our moderators conducted semi-structured focus groups to probe for potential underlying assumptions that could give rise to particular views and opinions. Each focus group had two parts: first, participants spoke of their health information-seeking behaviors related to FDA-regulated products; second, participants shared their thoughts, preferences, and recommendations about three examples of communication methods that the FDA uses to disseminate health information to the public. These communication methods included a brochure describing the Vaccine Adverse Event Reporting System (VAERS), an FDA webpage describing safety communication for Biotin, and an FDA drug safety podcast explaining the adverse effects of a migraine patch. We assessed whether participants’ motivations, preferred information sources, and materials varied by product type.
Participants were also given the opportunity to discuss other relevant thoughts or concerns and to provide input about other types of communication not previously discussed.

2.4. Recruitment

Using a convenience sampling strategy, recruitment was conducted by the community engagement team of the Patient-Centered Involvement in Evaluating the Effectiveness of Treatments (PATIENTS) Program at the University of Maryland School of Pharmacy. The PATIENTS Program recruited women through their social networks and community-based organizations. The PATIENTS Program’s community-focused approach engages patients, care providers, and local communities in West Baltimore and beyond, especially those from underserved and minority populations, in patient-centered-outcomes research . The community engagement team collaborated with its network of community-based, faith-based, health care, and senior housing facilities across the Baltimore–Washington metropolitan area to recruit participants and host focus groups in locations convenient to participants, with the goal of recruiting six to eight participants for each focus group. Evidence from previous studies indicates that 80% of prevalent themes are discoverable within two to three focus groups, and 90% are discoverable within three to six focus groups . Thus, the goal was to conduct three to four focus groups per generation. At each collaborating site, we implemented the following recruitment steps: (a) identifying the appropriate age group for each focus group site; (b) identifying convenient focus group venues; (c) scheduling focus groups; (d) tailoring the recruitment flyer for each focus group; (e) reaching out to potential participants via in-person communications, phone calls, and newsletters; (f) screening interested women; and (g) enrolling women who met the eligibility criteria.

2.5. Procedure and Data Collection

To ensure anonymity and cultivate the trust needed for an open discussion, we offered participants the option of not using their full names, using an alias, and not disclosing any identifiable health information during the discussions. For confidentiality, we asked participants to refrain from sharing information with anyone outside of the focus group. Each participant was provided a $40 gift card as a token of appreciation for their participation. Prior to each focus group discussion, participants completed a brief demographic survey. Three qualitative researchers facilitated the focus group discussions, and one researcher co-facilitated the discussions and took notes. Audio recordings of the focus groups were transcribed meaning to meaning, and all participant records were kept confidential.

2.6. Data Analysis

Data collection and analysis occurred concurrently. Thematic data saturation was evaluated throughout this concurrent process by assessing whether new focus groups repeated the topics and themes expressed in other groups of the same generation. Transcripts and facilitator and co-facilitator notes were imported into NVivo 11® software (QSR International, Burlington, MA, USA) for analysis. Following a stepwise inductive thematic analytic approach, two researchers developed a codebook for analysis, independently coded the focus group data, identified conceptual themes, and discussed discrepancies in coding. Themes were discussed among the research team for overall group consensus. Results were organized by the five main topics of the focus group guide: (1) motivations and purposes for seeking health information; (2) challenges in seeking health information; (3) preferred methods and sources for health information; (4) preferred communication materials for FDA-regulated products; and (5) suggestions to improve FDA communication materials. Additional topics that emerged during the discussions were also included.

2.7. Ethical Approval

The University of Maryland, Baltimore Institutional Review Board (IRB) and the FDA IRB approved the study protocol. Informed consent was obtained from each participant before the beginning of each focus group.
3. Results

A total of 109 women participated in 13 focus groups, and each discussion lasted 1.5 to 2 h.
One-third (33%) of participants were from Generation A, 27% were from Generation B, and 40% were from Generation C . The majority (66%) of participants self-identified as African American, 36% identified as White, 3% preferred not to answer, and less than 2% identified as Native Hawaiian/Pacific Islander or Hispanic/Latino. About a quarter (24%) of participants self-identified as caregivers. Results were organized by the main topics of the focus group guide : (a) motivations and purposes for seeking health information; (b) perceptions about health information sources; (c) challenges in seeking health information; (d) preferred methods and sources for health information; and (e) preferred FDA health communication materials. Two unsolicited topics emerged from the discussions: (a) the impact of religion and spirituality in health decision-making and (b) the use of complementary and alternative medicine. These topics were spontaneously discussed by participants in the first focus group with Generation C and were also raised by participants in several of the subsequent focus groups. Thematic saturation was reached, as no new themes were identified after the fourth focus group of each generation. Extracted themes and example quotes are reported below, along with differences found between generations and FDA-regulated products by caregiving status. Participants’ quotes do not necessarily reflect the opinions of the study researchers, the FDA, or the United States Government. , , and list and compare subtopics and themes by generation. In these tables, a range of the number of times each theme was endorsed by each generation is presented: none; several (1–10 times); some (11–20 times); and many (21 times or more). Only verbally expressed opinions were reported in the tables. Non-verbal responses (e.g., head nodding) were not captured in the transcripts and are thus not included in these ranges.

3.1. Motivations and Purposes for Seeking Health Information

Participants’ motivations and purposes for seeking health information were identified, differentiating between information for personal use and caregiving .

3.1.1. Information for Personal Use

Understanding drug side effects, effectiveness, and interactions was the primary purpose for seeking health information. Participants across all three generations noted that their primary purpose when seeking health information is to understand their medications’ risks, necessities, and interactions. For instance, one participant stated:

“I’m allergic to a lot of medication. Even though the doctor can prescribe things, I can’t take anything with morphine, codeine, or any of that in it. So, I have to be my own guardian about that stuff. I have to read those things because I need to know what’s in it!” (Generation C)

Participants expressed their concern about being overprescribed by providers and their need for informational support. One participant noted:

“I’ve been on two different kinds of high blood pressure pills for years. I’ve never understood why I’m on two. I would have thought a higher dose of one would make more sense. I just went to the clinic the other day and saw a new doctor and he gave me a third one! The bottle is still sitting there unopened. I’ve been asking people I know in the health industry, and some say I should, and some say I shouldn’t take it.” (Generation A)

3.1.2. Information for Caregiving to Others

Younger women sought information as caregivers regarding their children’s vaccines, medications, and food allergies. As caregivers for young children, several participants from our Generation A and Generation B groups spoke about seeking information pertaining to the human papillomavirus (HPV) vaccine, pediatric medications, and food allergies from multiple sources, including their providers and the internet. Generation C participants did not report health information-seeking for caregiving purposes.
A participant from Generation A stated: “When it’s your kid, you do tend to read more about it before you say that, that medicine is OK for them, even if it’s an antibiotic. It’s something that you tend to read more about versus than saying OK, fine.”

3.1.3. Perceptions about Health Information Sources

Participants discussed their opinions on health information from different sources, including the FDA, the pharmaceutical industry, health care providers, and sites on the internet. Trust and trustworthiness were defining features of the participants’ perceptions about these entities .

Trust and Trustworthiness

The FDA was perceived as a reputable organization, and the “FDA-approved” sign was perceived as trustworthy. Across all generations, several participants viewed the FDA as reputable and trusted FDA-approved products. Nevertheless, a few participants who expressed trust in the FDA were not familiar with the exact roles and responsibilities of the agency. Participants noted the following:

“I trust it (FDA) because I’ve seen it all my life. Do I know what the FDA is? No. But I trust it because I’ve seen it on food, medical facilities, all that.” (Generation A)

“There are less pictures (in FDA materials). It just seems more like a required document, like someone put time into it, there’s a format they follow, and standards. It makes me trust it more.” (Generation B)

Perceived Conflict of Interest

There was a perceived conflict of interest between the government, the pharmaceutical industry, and health care providers. Of the participants indicating that the FDA was a trusted source of information, several from all three generations discussed the FDA’s perceived conflict of interest, which reduced their ability to trust information from these sources. Several discussions pointed to a lack of trust in the government in general, questioning the trustworthiness of the FDA as a government agency and its links to the pharmaceutical industry.
These participants believed that the FDA had a financial incentive to support and advertise certain pharmaceutical products. In addition, several participants discussed their skepticism of FDA information because of past mistakes (e.g., drug recalls). Other participants said that they do not understand the role of the FDA and that the agency’s regulatory procedures are not always transparent. For others, physicians were also viewed as untrustworthy, e.g., paid to recommend specific drugs. These two statements underscore participants’ skepticism:

“When you hear things about drugs or devices on the news, why isn’t the FDA coming to us? Why can’t we hear about it before a mass lawsuit? It would get to consumers quicker so they can make informed decisions before hearing that 20,000 people died.” (Generation A)

“The pharmaceutical companies and the FDA are all basically one and the same in many respects. The doctors, naturally, listen to the pharmaceutical companies. They owe—the borrower is server to the lender.” (Generation C)

Financial Interest

There was a lack of trust in the pharmaceutical industry based on financial interest. Several participants attributed their lack of trust in pharmaceutical companies to their perception that a number of these companies have unethical practices and intentionally raise the prices of essential medications and devices.

“A lot of people can’t afford diabetes medication but so many people need it! It seems like there are common illnesses and drug companies will jack the prices up on them.” (Generation C)

Time and Transparency

Familiar health care providers were noted as trustworthy, but it took time and transparency to build this trust. Although all generations spoke about generally trusting their health care providers (e.g., primary care physician), they expressed concern that many physicians may not have the most updated information about medications and adverse effects.
Participants felt comfortable if they were able to ask their provider questions and build relationships over time.

“For my personal doctor, I’ve been with her since I was 16. She has shared with me over the years what she does to keep herself apprised of the new information. I trust her now, but that’s a long-time relationship.” (Generation A)

“I question every doctor. If you get an attitude or upset because I’m asking you a question about your profession, we’re done. Even when you go to the pharmacy, you have to know your health.” (Generation A)

Verification of Internet Sources

Generations A and C were skeptical about most health information they found on the internet and said that it should be considered with caution and verified by “reputable” sources.

“I don’t always agree or trust what I get on the internet. I do some examinations for myself and then make decisions.” (Generation C)

3.2. Challenges in Seeking Health Information

Several health information-seeking challenges were discussed in our focus groups. We categorized these challenges and examples under two main headings: (a) comprehension and (b) sufficiency.

3.2.1. Comprehension of Health Information

Medical information was not written or formatted appropriately to be comprehended well by all patients. Participants from all generations frequently agreed that health communication materials, particularly medication package inserts, are often not appropriate for all patients because the information is presented in technical language and a small font.

3.2.2. Sufficiency of Health Information

Overwhelming information was a barrier to finding specific health information. All three generations spoke about the difficulty of obtaining sufficient health-related information from one source. Participants often sought information from many sources, including different health care providers, internet sites, peers, and family members. Generations A and B expressed that many sources could be helpful.
However, making sense of this voluminous information was overwhelming, as was the need for thorough validation to ensure that all the information was reliable and useful. For example, participants noted the following:

“… I need to talk to every one of my doctors and figure out if the dose on mine is still good. Then I have to put all their information together because they won’t all agree. They won’t all say the same thing. I’ll come back around and ask more questions. A year later, I might have my answer or what I’m comfortable with.” (Generation A)

“I find there’s an overload of information, not that there’s a lack of. You’re going to get 500 websites talking about whatever subject you put in. Then you have to filter through that to try to get the information that you want.” (Generation B)

3.3. Preferred Methods and Sources for Health Information

Although our participants used the term “sources” to refer to both the sources and the methods of health information delivery, we differentiated them in our thematic analysis.

3.3.1. Preferred Methods

In-Person and Live Interaction

In-person and live interaction was the preferred method of receiving health information. Participants from all generations identified in-person (e.g., face-to-face) and live (e.g., phone calls, telemedicine) interactions as the best method for obtaining answers to health-related questions. Examples of useful in-person and live interactions included speaking with health care professionals over the phone to answer specific health questions, asking pharmacists for details about drugs, and discussing diseases and health conditions with physicians. For instance, participants stated the following:

“The nurses can usually explain to you what you’re taking the medicines for or if you have any other kind of issues. I would suggest they do the nurse line rather than the website.” (Generation B)

“If they have available staff there to answer the question, then this would be a good thing. Some people do better talking with somebody on the phone than reading.” (Generation B)

3.3.2. Preferred Sources

Personal Health Care Providers

Overall, health care providers were the main and preferred source of health information on the three types of FDA-regulated products for all generations. Participants reflected on their personal interactions with their health care providers and said that the health information they receive through these interactions is the most useful and trustworthy. Although the FDA does not regulate the practice of medicine, some participants were concerned about their providers’ limited time and knowledge about their health concerns and the lack of communication between primary care and specialized providers. One participant said:

“Doctors don’t tell you everything. You’re in there for a 15-min office visit, you forget the question you wanted to ask, so you get home, look up everything you want to know, then when you go back to the doctor, you can go over it.” (Generation C)

3.3.3. Utilized Sources

Internet

Participants from all three generations reported frequent use of the internet to find health information. Some women in Generation A and a few in Generations B and C cited different purposes for using the internet, including confirming providers’ information, preparing for medical appointments, and finding general information about their symptoms. Many participants who spoke about internet use stated that a “Google search” was their gateway for internet searches. The following websites were mentioned by participants: WebMD, Mayo Clinic, health insurance companies, support groups, MedlinePlus, Dr. Weil, FDA, pharmaceutical companies, National Center for Homeopathy, ABC Homeopathy, YouTube, and Facebook.
Typical responses were as follows:

“First, I go to my internist or other specialty doctor, then I reinterpret what they tell me through Google.” (Generation C)

“You can research your symptoms, see what type of medication they may give you, then you go to the doctor and you’re ready to hear the options. You already have some information you’ve collected for yourself. That’s how I prepare myself.” (Generation A)

Social Media

The youngest generation was most likely to use social media, particularly Facebook, to solicit medical advice or information from family members and peers. However, they said they were cautious when using information from Facebook to guide important health decisions. One participant noted:

“Another resource is Facebook. I don’t put a lot of private stuff on Facebook, but I’d ask if anybody knows anything about this.” (Generation A)

Health Fairs, Workshops, and Health Expos

Older generations received health information from health fairs, workshops, and health expos. Several participants from Generations B and C reported frequent participation in in-person educational venues (e.g., health fairs, workshops, and health expos). For instance, two participants explained as follows:

“At the health fairs, they take time to explain it to you and answer questions as best they can.” (Generation B)

“[The expo] is once a year, and they give you lots of information about shots and things going on to keep seniors healthy.” (Generation C)

Newsletter

The oldest generation was more likely to mention online and printed newsletters as their sources of information. Subscription newsletters included those from the National Institutes of Health (NIH), Cleveland and Mayo Clinics, Brigham and Women’s Hospital, UnitedHealthcare, Seniors Digest, Nutrition Today, and Bottom Line. One woman noted:

“I get a lot of information from the Women’s Hospital in Boston. They have a very good newsletter that comes out.” (Generation C)

Family Members and Friends

All generations mentioned asking health questions and sharing concerns with family members who work in the medical field (e.g., nurses and physicians). Generation C women spoke about asking younger family members to confirm via the internet the information they received from their physicians, particularly if they were not well-versed in technology and conducting online searches. For instance, participants stated the following:

“My mom is a retired RN, so everybody in the family just goes to her with questions about health.” (Generation A)

“I go to the doctor a lot. I ask questions even when I don’t understand. I ask them to explain it to me. I write it down. I take it to my daughter who helps explain it to me.” (Generation C)

3.4. Preferred Communication Materials for FDA-Regulated Products

In the second part of each focus group, we presented three examples of communications (website, brochure, and podcast) that the FDA uses to disseminate health information. Presenting these examples to participants was intended to generate discussion and elicit thoughts and recommendations on preferred communication platforms. The feedback on these materials is as follows.

3.4.1. Websites

Overall, participants from all generations found websites to be useful because they were comprehensive and available to all types of learners. One participant said:

“A lot of people love the internet. They are quick on it. The world is in your hands, right here in this phone.” (Generation A)

Some participants noted that websites were particularly important when seeking information about prescribed drugs and their contraindications. A typical response was:

“Websites would be a good place to go to learn about side effects, dosages, what causes medicine interactions.” (Generation B)

3.4.2. Brochures

Brochures were useful for older generations.
Generations B and C discussed the benefits of brochures: they can be referenced later, shared with other people, provide a summary of information and a starting point for finding more details, serve as a physical/visual reminder from providers, and act as a tool for generating conversations with providers during health care visits. For instance, one participant noted:

“You know, [the doctor] can’t tell you everything in just 15 min so I think it’s a good thing. When you get home, you can read over something at your own pace.” (Generation C)

Generation A expressed concerns about brochures, including advertising that makes content about vaccines or drugs untrustworthy, overuse of pictures that undermines legitimacy, and the waste of environmental resources. One woman noted:

“If the pictures were real, that would help. In 20–30 years, we might trust pamphlets more. Then paper will be obsolete, so there’s no point.” (Generation A)

3.4.3. Podcasts

Podcasts were deemed best for announcements, multitasking, and tech-savvy users. Although participants were skeptical about using podcasts, all generations referred to podcasts as a good source for disseminating public announcements and notifications (e.g., drug recalls) to aural learners and to those in multitasking situations. Older generations mentioned they would recommend podcasts to younger audiences who were technologically savvy. As one participant explained:

“We can simultaneously do three or four things … You can pick it up through your multitasking … when you hear something that gets your attention, if you can go to it right then, you will.” (Generation B)

3.5. Suggestions to Improve FDA Communication Materials

3.5.1. Multiple Materials and Approaches

Participants suggested that the FDA should employ a variety of communication modes, formats, and approaches given the variability in individual learning styles, skills, access, and preferences. One participant noted:

“I don’t think it’s all the same. I think diversity is what we need … We need to have different things. This might work... You might want to look at all three [methods] and draw something from each one.” (Generation B)

3.5.2. Website Enhancements

Suggestions for improving the utility and accessibility of the FDA website included ensuring that the website appears at the top of Google searches for health topics, that the FDA website’s search engine is optimized, and that information is organized with drop-down menus. For example, participants said the following:

“I didn’t realize till I heard about this project that there was a website [for FDA]. When I looked, I couldn’t believe how much information was there!” (Generation C)

“You need to know what you’re looking for when you go to the FDA site. If you don’t, it’s overwhelming.” (Generation A)

3.6. Emergent Themes

Two topics relevant to health information-seeking and health decision-making emerged during our focus groups without prompting from the moderators. We organized these discussions under two subtopics: (a) religion and spirituality in relation to health decision-making and (b) complementary and alternative medicine.

3.6.1. Religion and Spirituality

Health decisions were often guided by religious beliefs and spirituality. Some participants across all generations cited their faith in God to take care of their health and reported that their religious practices and beliefs guide their health care decisions. Some participants in Generations A and C said their spirituality plays an important role in their health-related decision-making and that they engage in spiritual activities like meditation and yoga to improve their ability to handle their medical decisions. For example, one participant commented:

“I know the doctors are His helpers. That’s how I look at it. He’s got the first and last word when it comes to those decisions.” (Generation A)

3.6.2. Complementary and Alternative Medicine

Participants had a preference for complementary and alternative medicine. Many participants from all generations said they or their family members use alternative medicine or remedies from other countries in addition to, or in place of, Western medicine. In the Generation C groups, some participants discussed the efficacy of these treatments and the FDA’s role in their approval and regulation. Participants said the following:

“I’ve been using holistic and natural remedies since 1974. Yes, sometimes we need allopathic and there are some really good doctors, but the FDA puts a lot of fear out there in order to keep the pharmaceutical companies going.” (Generation C)

“My parents are 74 years old. They don’t look like it though. They’re more home remedy people. They don’t like the hospital and don’t want to go there. They have a home remedy book and they’ve passed it around my family. Some of these things really do work.” (Generation A)
One participant noted “I’ve been on two different kinds of high blood pressure pills for years. I’ve never understood why I’m on two. I would have thought a higher dose of one would make more sense. I just went to the clinic the other day and saw a new doctor and he gave me a third one! The bottle is still sitting there unopened. I’ve been asking people I know in the health industry, and some say I should, and some say I shouldn’t take it.” (Generation A) 3.1.2. Information for Caregiving to Others Younger women sought information as caregivers regarding their children’s vaccines, medications, and food allergies. As caregivers for young children, several participants from our Generation A and Generation B groups spoke about seeking information pertaining to the human papillomavirus (HPV) vaccine, pediatric medications, and food allergies from multiple sources, including their providers and the internet. Generation C participants did not report health information-seeking for caregiving purposes. A participant from Generation A stated “When it’s your kid, you do tend to read more about it before you say that, that medicine is OK for them, even if it’s an antibiotic. It’s something that you tend to read more about versus than saying OK, fine”. 3.1.3. Perceptions about Health Information Sources Participants discussed their opinions on health information from different sources, including the FDA, pharmaceutical industry, health care providers, and sites on the internet. Trust and trustworthiness were defining features of the participants’ perceptions about these entities . Trust and Trustworthiness The FDA was perceived as a reputable organization and the “FDA-approved” sign was perceived as trustworthy. Across all generations, several participants viewed the FDA as reputable and trusted FDA-approved products. Nevertheless, a few participants who expressed trust in the FDA were not familiar with the exact roles and responsibilities of the agency. 
Participants noted the following: “I trust it (FDA) because I’ve seen it all my life. Do I know what the FDA is? No. But I trust it because I’ve seen it on food, medical facilities, all that.” (Generation A) “There are less pictures (in FDA materials). It just seems more like a required document, like someone put time into it, there’s a format they follow, and standards. It makes me trust it more.” (Generation B) Perceived Conflict of Interest There was a perceived conflict of interest between the government, pharmaceutical industry, and health care providers. Of the participants indicating that the FDA was a trusted source of information, several from all three generations discussed the FDA’s perceived conflict of interest, which reduced their ability to trust information from these sources. Several discussions pointed to a lack of trust in the government in general, questioning the trustworthiness of the FDA as a government agency and its links to the pharmaceutical industry. These participants believe that the FDA had a financial incentive to support and advertise certain pharmaceutical products. In addition, several participants discussed their skepticism of FDA information because of past mistakes (e.g., drug recalls). Other participants said that they do not understand the role of the FDA and the agency’s regulatory procedures are not always transparent. For others, physicians were also viewed as untrustworthy, e.g., paid to recommend specific drugs. These two statements underscore participants’ skepticism: “When you hear things about drugs or devices on the news, why isn’t the FDA coming to us? Why can’t we hear about it before a mass lawsuit? It would get to consumers quicker so they can make informed decisions before hearing that 20,000 people died.” (Generation A) “The pharmaceutical companies and the FDA are all basically one and the same in many respects. The doctors, naturally, listen to the pharmaceutical companies. 
They owe—the borrower is server to the lender.” (Generation C) Financial Interest There was a lack of trust in the pharmaceutical industry based on financial interest. Several participants attributed their lack of trust in pharmaceutical companies to their perception that a number of these companies have unethical practices and intentionally raise prices of essential medications and devices. “A lot of people can’t afford diabetes medication but so many people need it! It seems like there are common illnesses and drug companies will jack the prices up on them.” (Generation C) Time and Transparency Familiar health care providers were noted as trustworthy, but it took time and transparency to build this trust. Although all generations spoke about generally trusting their health care providers (e.g., primary care physician), they expressed concern that many physicians may not have the most updated information about medications and adverse effects. Participants felt comfortable if they were able to ask their provider questions and build relationships over time. “For my personal doctor, I’ve been with her since I was 16. She has shared with me over the years what she does to keep herself apprised of the new information. I trust her now, but that’s a long-time relationship.” (Generation A) “I question every doctor. If you get an attitude or upset because I’m asking you a question about your profession, we’re done. Even when you go to the pharmacy, you have to know your health.” (Generation A) Verification of Internet Sources Generations A and C were skeptical about most health information they found on the internet and said that it should be considered with caution and verified by “reputable” sources. “I don’t always agree or trust what I get on the internet. I do some examinations for myself and then make decisions.” (Generation C) Understanding drug side effects, effectiveness, and interactions was the primary purpose for seeking health information. 
Participants across all three generations noted that their primary purpose when seeking health information is to understand their medications’ risks, necessities, and interactions. For instance, one participant stated “I’m allergic to a lot of medication. Even though the doctor can prescribe things, I can’t take anything with morphine, codeine, or any of that in it. So, I have to be my own guardian about that stuff. I have to read those things because I need to know what’s in it!” (Generation C) Participants expressed their concern about being overprescribed by providers and their need for informational support. One participant noted “I’ve been on two different kinds of high blood pressure pills for years. I’ve never understood why I’m on two. I would have thought a higher dose of one would make more sense. I just went to the clinic the other day and saw a new doctor and he gave me a third one! The bottle is still sitting there unopened. I’ve been asking people I know in the health industry, and some say I should, and some say I shouldn’t take it.” (Generation A) Younger women sought information as caregivers regarding their children’s vaccines, medications, and food allergies. As caregivers for young children, several participants from our Generation A and Generation B groups spoke about seeking information pertaining to the human papillomavirus (HPV) vaccine, pediatric medications, and food allergies from multiple sources, including their providers and the internet. Generation C participants did not report health information-seeking for caregiving purposes. A participant from Generation A stated “When it’s your kid, you do tend to read more about it before you say that, that medicine is OK for them, even if it’s an antibiotic. It’s something that you tend to read more about versus than saying OK, fine”. 
Participants discussed their opinions on health information from different sources, including the FDA, pharmaceutical industry, health care providers, and sites on the internet. Trust and trustworthiness were defining features of the participants’ perceptions about these entities . Trust and Trustworthiness The FDA was perceived as a reputable organization and the “FDA-approved” sign was perceived as trustworthy. Across all generations, several participants viewed the FDA as reputable and trusted FDA-approved products. Nevertheless, a few participants who expressed trust in the FDA were not familiar with the exact roles and responsibilities of the agency. Participants noted the following: “I trust it (FDA) because I’ve seen it all my life. Do I know what the FDA is? No. But I trust it because I’ve seen it on food, medical facilities, all that.” (Generation A) “There are less pictures (in FDA materials). It just seems more like a required document, like someone put time into it, there’s a format they follow, and standards. It makes me trust it more.” (Generation B) Perceived Conflict of Interest There was a perceived conflict of interest between the government, pharmaceutical industry, and health care providers. Of the participants indicating that the FDA was a trusted source of information, several from all three generations discussed the FDA’s perceived conflict of interest, which reduced their ability to trust information from these sources. Several discussions pointed to a lack of trust in the government in general, questioning the trustworthiness of the FDA as a government agency and its links to the pharmaceutical industry. These participants believe that the FDA had a financial incentive to support and advertise certain pharmaceutical products. In addition, several participants discussed their skepticism of FDA information because of past mistakes (e.g., drug recalls). 
Other participants said that they do not understand the role of the FDA and the agency’s regulatory procedures are not always transparent. For others, physicians were also viewed as untrustworthy, e.g., paid to recommend specific drugs. These two statements underscore participants’ skepticism: “When you hear things about drugs or devices on the news, why isn’t the FDA coming to us? Why can’t we hear about it before a mass lawsuit? It would get to consumers quicker so they can make informed decisions before hearing that 20,000 people died.” (Generation A) “The pharmaceutical companies and the FDA are all basically one and the same in many respects. The doctors, naturally, listen to the pharmaceutical companies. They owe—the borrower is server to the lender.” (Generation C) Financial Interest There was a lack of trust in the pharmaceutical industry based on financial interest. Several participants attributed their lack of trust in pharmaceutical companies to their perception that a number of these companies have unethical practices and intentionally raise prices of essential medications and devices. “A lot of people can’t afford diabetes medication but so many people need it! It seems like there are common illnesses and drug companies will jack the prices up on them.” (Generation C) Time and Transparency Familiar health care providers were noted as trustworthy, but it took time and transparency to build this trust. Although all generations spoke about generally trusting their health care providers (e.g., primary care physician), they expressed concern that many physicians may not have the most updated information about medications and adverse effects. Participants felt comfortable if they were able to ask their provider questions and build relationships over time. “For my personal doctor, I’ve been with her since I was 16. She has shared with me over the years what she does to keep herself apprised of the new information. 
I trust her now, but that’s a long-time relationship.” (Generation A) “I question every doctor. If you get an attitude or upset because I’m asking you a question about your profession, we’re done. Even when you go to the pharmacy, you have to know your health.” (Generation A) Verification of Internet Sources Generations A and C were skeptical about most health information they found on the internet and said that it should be considered with caution and verified by “reputable” sources. “I don’t always agree or trust what I get on the internet. I do some examinations for myself and then make decisions.” (Generation C) The FDA was perceived as a reputable organization and the “FDA-approved” sign was perceived as trustworthy. Across all generations, several participants viewed the FDA as reputable and trusted FDA-approved products. Nevertheless, a few participants who expressed trust in the FDA were not familiar with the exact roles and responsibilities of the agency. Participants noted the following: “I trust it (FDA) because I’ve seen it all my life. Do I know what the FDA is? No. But I trust it because I’ve seen it on food, medical facilities, all that.” (Generation A) “There are less pictures (in FDA materials). It just seems more like a required document, like someone put time into it, there’s a format they follow, and standards. It makes me trust it more.” (Generation B) There was a perceived conflict of interest between the government, pharmaceutical industry, and health care providers. Of the participants indicating that the FDA was a trusted source of information, several from all three generations discussed the FDA’s perceived conflict of interest, which reduced their ability to trust information from these sources. Several discussions pointed to a lack of trust in the government in general, questioning the trustworthiness of the FDA as a government agency and its links to the pharmaceutical industry. 
These participants believe that the FDA had a financial incentive to support and advertise certain pharmaceutical products. In addition, several participants discussed their skepticism of FDA information because of past mistakes (e.g., drug recalls). Other participants said that they do not understand the role of the FDA and the agency’s regulatory procedures are not always transparent. For others, physicians were also viewed as untrustworthy, e.g., paid to recommend specific drugs. These two statements underscore participants’ skepticism: “When you hear things about drugs or devices on the news, why isn’t the FDA coming to us? Why can’t we hear about it before a mass lawsuit? It would get to consumers quicker so they can make informed decisions before hearing that 20,000 people died.” (Generation A) “The pharmaceutical companies and the FDA are all basically one and the same in many respects. The doctors, naturally, listen to the pharmaceutical companies. They owe—the borrower is server to the lender.” (Generation C) There was a lack of trust in the pharmaceutical industry based on financial interest. Several participants attributed their lack of trust in pharmaceutical companies to their perception that a number of these companies have unethical practices and intentionally raise prices of essential medications and devices. “A lot of people can’t afford diabetes medication but so many people need it! It seems like there are common illnesses and drug companies will jack the prices up on them.” (Generation C) Familiar health care providers were noted as trustworthy, but it took time and transparency to build this trust. Although all generations spoke about generally trusting their health care providers (e.g., primary care physician), they expressed concern that many physicians may not have the most updated information about medications and adverse effects. 
Participants felt comfortable if they were able to ask their provider questions and build relationships over time. “For my personal doctor, I’ve been with her since I was 16. She has shared with me over the years what she does to keep herself apprised of the new information. I trust her now, but that’s a long-time relationship.” (Generation A) “I question every doctor. If you get an attitude or upset because I’m asking you a question about your profession, we’re done. Even when you go to the pharmacy, you have to know your health.” (Generation A) Generations A and C were skeptical about most health information they found on the internet and said that it should be considered with caution and verified by “reputable” sources. “I don’t always agree or trust what I get on the internet. I do some examinations for myself and then make decisions.” (Generation C) Several health information-seeking challenges were discussed in our focus groups. We categorized these challenges and examples under two main headings: (a) comprehension and (b) sufficiency . 3.2.1. Comprehension of Health Information Medical information was not written or formatted appropriately to be comprehended well by all patients. Participants from all generations very often agreed that health communication materials, particularly medication package inserts, are often not appropriate for all patients because the information is presented using technical language and a small-sized font. 3.2.2. Sufficiency of Health Information Overwhelming information was a barrier to finding specific health information. All three generations spoke about the difficulty of obtaining sufficient health-related information from one source. Participants often sought information from many sources, including different health care providers, internet sites, peers, and family members. Generations A and B expressed that many sources could be helpful. 
However, making sense of this voluminous information was overwhelming, as was the need for thorough validation to ensure that all the information was reliable and useful. For example, participants noted the following: “… I need to talk to every one of my doctors and figure out if the dose on mine is still good. Then I have to put all their information together because they won’t all agree. They won’t all say the same thing. I’ll come back around and ask more questions. A year later, I might have my answer or what I’m comfortable with.” (Generation A) “I find there’s an overload of information, not that there’s a lack of. You’re going to get 500 websites talking about whatever subject you put in. Then you have to filter through that to try to get the information that you want.” (Generation B)

Although our participants used the term “sources” to refer to the sources and methods of health information delivery, we differentiated them in our thematic analysis.

3.3.1. Preferred Methods

In-Person and Live Interaction

In-person and live interaction was the preferred method to receive health information. Participants from all generations identified in-person (e.g., face-to-face) and live (e.g., phone calls, telemedicine) interactions as the best method for obtaining answers to health-related questions. Examples of useful in-person and live interactions included speaking with health care professionals over the phone to answer specific health questions, asking pharmacists for details about drugs, and discussing diseases and health conditions with physicians. For instance, participants stated the following: “The nurses can usually explain to you what you’re taking the medicines for or if you have any other kind of issues. I would suggest they do the nurse line rather than the website.” (Generation B) “If they have available staff there to answer the question, then this would be a good thing. Some people do better talking with somebody on the phone than reading.” (Generation B)

3.3.2. Preferred Sources

Personal Health Care Providers

Overall, health care providers were the main and preferred source of health information on the three types of FDA-regulated products for all generations. Participants reflected on their personal interactions with their health care providers and said that the health information they receive through these interactions is the most useful and trustworthy.
Although the FDA does not regulate the practice of medicine, some participants were concerned about their providers’ limited time and knowledge about their health concerns and the lack of communication between primary care and specialized providers. One participant said “Doctors don’t tell you everything. You’re in there for a 15-min office visit, you forget the question you wanted to ask, so you get home, look up everything you want to know, then when you go back to the doctor, you can go over it.” (Generation C)

3.3.3. Utilized Sources

Internet

Participants from all three generations reported frequent use of the internet to find health information. Some women in Generation A and a few in Generations B and C cited different purposes for using the internet, including confirming providers’ information, preparing for medical appointments, and finding general information about their symptoms. Many participants who spoke about internet use stated that a “Google search” was their gateway for internet searches. The following websites were mentioned by participants: WebMD, Mayo Clinic, health insurance companies, support groups, MedlinePlus, Dr. Weil, FDA, pharmaceutical companies, National Center for Homeopathy, ABC Homeopathy, YouTube, and Facebook. Typical responses were as follows: “First, I go to my internist or other specialty doctor, then I reinterpret what they tell me through Google.” (Generation C) “You can research your symptoms, see what type of medication they may give you, then you go to the doctor and you’re ready to hear the options. You already have some information you’ve collected for yourself. That’s how I prepare myself.” (Generation A)

Social Media

The youngest generation was most likely to use social media, particularly Facebook, to solicit medical advice or information from family members and peers. However, they said they were cautious when using information from Facebook to guide important health decisions.
One participant noted “Another resource is Facebook. I don’t put a lot of private stuff on Facebook, but I’d ask if anybody knows anything about this.” (Generation A)

Health Fairs, Workshops, and Health Expos

Older generations received health information from health fairs, workshops, and health expos. Several participants from Generations B and C reported frequent participation in in-person educational venues (e.g., health fairs, workshops, and health expos). For instance, two participants explained as follows: “At the health fairs, they take time to explain it to you and answer questions as best they can.” (Generation B) “[The expo] is once a year, and they give you lots of information about shots and things going on to keep seniors healthy.” (Generation C)

Newsletter

The oldest generation was more likely to mention online and printed newsletters as their sources of information. Subscription newsletters included those from the National Institutes of Health (NIH), Cleveland and Mayo Clinics, Brigham and Women’s Hospital, UnitedHealthcare, Seniors Digest, Nutrition Today, and Bottom Line. One woman noted “I get a lot of information from the Women’s Hospital in Boston. They have a very good newsletter that comes out.” (Generation C)

Family Members and Friends

All generations mentioned asking health questions and sharing concerns with family members who work in the medical field (e.g., nurses and physicians). Generation C women spoke about asking younger family members to confirm via the internet the information they received from their physicians, particularly if they are not well-versed in technology and conducting online searches. For instance, participants stated the following: “My mom is a retired RN, so everybody in the family just goes to her with questions about health.” (Generation A) “I go to the doctor a lot. I ask questions even when I don’t understand. I ask them to explain it to me. I write it down.
I take it to my daughter who helps explain it to me.” (Generation C)

In the second part of each focus group, we presented three examples of communications (website, brochure, and podcast) that the FDA uses to disseminate health information. Presenting these examples to participants was intended to generate discussion and elicit thoughts and recommendations on preferred communication platforms. The feedback on these materials is as follows.

3.4.1. Websites

Overall, participants from all generations found websites to be useful because they were comprehensive and available to all types of learners. One participant said “A lot of people love the internet. They are quick on it. The world is in your hands, right here in this phone.” (Generation A) Some participants noted that websites were particularly important when seeking information about prescribed drugs and their contraindications.
A typical response was “Websites would be a good place to go to learn about side effects, dosages, what causes medicine interactions.” (Generation B)

3.4.2. Brochures

Brochures were useful for older generations. Generations B and C discussed the benefits of brochures, including being available to reference them later, sharable with other people, a summary of information, a reference for finding more details, a physical/visual reminder from providers, and a tool for generating conversations with providers during health care visits. For instance, one participant noted “You know, [the doctor] can’t tell you everything in just 15 min so I think it’s a good thing. When you get home, you can read over something at your own pace.” (Generation C) Generation A expressed concerns about brochures, including advertising that makes content about vaccines or drugs untrustworthy, overuse of pictures that impact legitimacy, and wasting environmental resources. One woman noted “If the pictures were real, that would help. In 20–30 years, we might trust pamphlets more. Then paper will be obsolete, so there’s no point.” (Generation A)

3.4.3. Podcasts

Podcasts were deemed the best for announcements, multitasking, and tech-savvy users. Although participants were skeptical about using podcasts, all generations referred to podcasts as a good source for disseminating public announcements and notifications (e.g., drug recalls) to aural learners and to those in multitasking situations. Older generations mentioned they would recommend podcasts to younger audiences who were technologically savvy. As one participant explained, “We can simultaneously do three or four things … You can pick it up through your multitasking … when you hear something that gets your attention, if you can go to it right then, you will.” (Generation B)

3.5.1. Multiple Materials and Approaches

Participants suggested that the FDA should employ a variety of communication modes, formats, and approaches given the variability in individual learning styles, skills, access, and preferences. One participant noted “I don’t think it’s all the same. I think diversity is what we need … We need to have different things. This might work... You might want to look at all three [methods] and draw something from each one.” (Generation B)

3.5.2. Website Enhancements

Suggestions for improving the utility and accessibility of the FDA website included ensuring that the website appears at the top of Google searches for health topics, that the FDA website’s search engine is optimized, and that information is organized with drop-down menus. For example, participants said the following: “I didn’t realize till I heard about this project that there was a website [for FDA]. When I looked, I couldn’t believe how much information was there!” (Generation C) “You need to know what you’re looking for when you go to the FDA site. If you don’t, it’s overwhelming.” (Generation A)

Two topics relevant to health information-seeking and health decision-making emerged during our focus groups without prompting from the moderators. We organized those discussions under two subtopics: (a) religion and spirituality in relation to health decision-making and (b) complementary and alternative medicine.

3.6.1. Religion and Spirituality

Health decisions were often guided by religious beliefs and spirituality. Some participants across all generations cited their faith in God to take care of their health and reported that their religious practices and beliefs guide their health care decisions. Some participants in Generations A and C said their spirituality plays an important role in their health-related decision-making and that they engage in spiritual activities like meditation and yoga to improve their ability to handle their medical decisions. For example, one participant commented “I know the doctors are His helpers. That’s how I look at it. He’s got the first and last word when it comes to those decisions.” (Generation A)

3.6.2. Complementary and Alternative Medicine

Participants had a preference for complementary and alternative medicine.
Many participants from all generations said they or their family members use alternative medicine or remedies from other countries in addition to, or in place of, Western medicine. In the Generation C groups, some participants discussed the efficacy of these treatments and FDA’s role in their approval and regulation. Participants said the following: “I’ve been using holistic and natural remedies since 1974. Yes, sometimes we need allopathic and there are some really good doctors, but the FDA puts a lot of fear out there in order to keep the pharmaceutical companies going.” (Generation C) “My parents are 74 years old. They don’t look like it though. They’re more home remedy people. They don’t like the hospital and don’t want to go there. They have a home remedy book and they’ve passed it around my family. Some of these things really do work.” (Generation A)

Understanding women’s motivations for seeking information, perceptions of the validity and usefulness of information sources, challenges in finding and understanding information, and preferred health information sources may help to improve the FDA’s regulated product health communications for women. According to the findings from this study, the specific information that women seek and how they wish to receive it depends on their motives for seeking the information, as well as their individual preferences. The primary reason that women sought health information was to locate reliable data about prescribed medications. The primary reason caregivers of children sought health information was to find reliable data about children’s vaccines, medications, and food allergies. How women sought health information varied across individuals, in part because of their ages. Our findings suggested there are generational differences in the most used information sources and resources. For instance, younger women preferred internet sources and older women preferred in-person educational venues, friends, and family members. The following key findings are based on our analysis of the focus group interviews.

First, trust in health information sources was an important topic in our discussions.
While several participants perceived the FDA as a reputable organization, a lack of trust in the government and the agency’s perceived conflict of interest with pharmaceutical companies jeopardized this positive image. Nonetheless, participants’ greater trust in the FDA compared to the federal government resembles the findings of Kowitt et al. (2017), which showed that 62.5% of adult Americans trusted the FDA, while only 42.9% trusted the federal government. Similar findings showed that less than 20% of Americans reported that they trusted the federal government, compared to the 70% who reported that they viewed the Centers for Disease Control and Prevention (CDC) favorably. In addition, evidence indicates that official governmental bodies are effective in fighting health misconceptions by providing corrective information. Moreover, previous research suggests that the authority of the health information owner has a positive impact on both its credibility and trust. Collectively, this suggests that public trust in the FDA presents an opportunity for the agency to leverage its reputation as a leading source of reliable and validated health information.

A second important topic was information sources. All generations identified in-person and live interaction as their most preferred method and their health care providers as their most preferred source for health information. All other sources, including the internet, social media, newsletters, family members, and friends, came next in preference. In addition, participants reported facing challenges in finding and understanding health-related information from various sources. For example, with the increasing access to and use of the internet, participants spoke of being overwhelmed with health information and having to discern useful and truthful information. Others reported receiving conflicting information from their primary care provider and specialists, contributing to frustration and confusion.
Additionally, many participants in Generation A, some in Generation B, and several in Generation C indicated that the FDA must consider appropriate reading levels to address health literacy needs for all patients. Another approach supported by this study is coordinating with health care providers, faith communities, and community health fairs to facilitate patients seeking health information from the FDA via in-person or live interactions. A key finding of the focus group discussions was the general suggestion for the FDA to consider diversifying its communication materials to reflect the variety of sources and materials that different generations use. Suggestions for improving FDA’s website underscore the need to improve the layout and features of health information websites, which would make it easier for diverse populations to understand . Improving the website readability and simplifying its content would improve the perception of the information without impacting the trust in the source . One way to improve access to the FDA’s health communications is to make its website more prominent on internet search engines . Each generational group in our study used available health information sources in its own way. Older generations reported greater use of in-person educational venues (e.g., health fairs and newsletters). The youngest generation reported more frequent use of the internet, including social media, compared to older generations. These findings are consistent with data from the Women’s Health Initiative cohort study indicating that only 60% of women aged 65 and older used the internet as a source of health information, and that women who used the internet as an information source were more likely to be younger, be non-Hispanic White, earn a higher income, have a higher educational level, and live with a partner . Specifically, participants reported receiving health information and advice from their friends and peers, partially through social media. 
These findings echo those from the HINTS 2013–2017 analysis, which showed that younger populations were more likely to use social media for health communication and that women tended to share health information on social media and online support groups for people with similar health conditions. The impact of religion and spirituality on participants’ health decisions was not surprising. Previous research found that religious beliefs can guide decision-making among older adults and represent a major coping aid during and after medical decision-making for critically ill patients. Similarly, the preference for complementary and alternative medicine aligns with the National Health Interview Surveys’ results that showed increases in the use of yoga, meditation, and chiropractic therapy from 2012 to 2017 among American adults. Limitations As with all focus group studies, results from our study should not be generalized to the broader population and should be interpreted within our participants’ demographics. Another limitation of our study is its lack of demographic and geographic diversity; more than 60% of participants were African Americans, and all participants were from the Baltimore–Washington area. A further limitation was that, due to time constraints, we focused discussions only on caregiving for children and not on caregiving for spouses, parents, or older family members. The examples of FDA health communication materials (brochure, webpage, and podcast) were provided to showcase different health communication methods and may not be representative of all FDA health communications. Note that the content is different in each example. Some participants likely were not familiar with specific products in the examples, whereas those who go on the FDA website might seek information on products for their conditions. It is possible that the product, subject matter, or content of the communication method had an impact on participant preference.
Therefore, feedback collected from participants on the examples may not be generalizable to all FDA communications, particularly as the FDA website is dynamic and the agency continues to enhance their public-facing communications. Finally, these data were collected before the COVID-19 outbreak. FDA has been a visible and important federal agency during the COVID-19 pandemic, especially on the topic of vaccines. Consequently, the use of the FDA website might have changed during the COVID-19 pandemic. Addressing these limitations may help to advance future research in this area. Overall, our findings suggest broadly the ways that the FDA can improve regulated medical product health communications intended for women. This focus group study indicated four specific options to consider. First, public trust in the FDA logo suggests that the agency could leverage its image as a leading source of reliable and validated health information. Second, in-person and live interactions were found to be the preferred method to receive health information. The FDA could explore ways to support these communications which may also enhance the image of the agency’s direct communications with patients and caregivers. Third, the FDA could investigate ways to be more visible on major internet search engines and identify ways to improve their website navigation. Lastly, the FDA could offer different media and communication strategies for conveying health information that accommodate different preferences for a variety of women, including images of diverse women in terms of race/ethnicity and age. Our findings should inform future quantitative research, such as a national survey, aimed at collecting nationally representative data to identify new strategies for improving health communications designed for women in the United States and to further investigate the nuances among preferences for health communications for the various types of FDA-regulated products.
Development and psychometric testing of a new instrument to measure factors influencing women’s breast cancer prevention behaviors (ASSISTS)
However, from our own experience, we observed that most Iranian women do not perform breast cancer screening behaviors because the Iranian Ministry of Health does not offer any national population-based screening programs for women. Few studies have considered behaviors related to breast cancer prevention in Iranian women. To make changes happen, understanding individuals’ health behaviors in regard to specific health issues is essential. Reviews on health-related behaviors have indicated that women will commonly not attempt to take preventive measures unless they have at least some level of related support, motivation and information. In addition, studies have shown that persons will be more likely to take part in the suggested behaviors if they improve their self-efficacy abilities to change their unhealthy behaviors. As a result, in order to develop effective interventions for improving breast cancer preventive behaviors, the predictive factors of these behaviors need to be recognized. At present, there exists no comprehensive, validated questionnaire on this topic. Thus, the purpose of the current paper was to develop and examine the psychometric properties of a newly developed instrument, called the ASSISTS, that can be used to explore factors influencing Iranian women’s behaviors for breast cancer prevention and perhaps show areas for applying interventions to increase preventive behaviors among women. To establish the validity of our instrument, the scale scores of our instrument will be correlated with the scores of four potentially associated constructs, namely perceived social support, cancer attitude, self-efficacy and stress management with regard to promoting a healthy lifestyle. Research design This study was approved by the Ethics Committee of Tehran University of Medical Sciences [Grant number 22847] and all participants provided written informed consent. The study was conducted in two phases.
In the first phase, we started by generating items and developing the instruments. A secondary analysis of previous qualitative data was done to provide an initial indication of candidate items, to generate relevant items, to evaluate face and content validity, and to determine the most appropriate phrasing. The second phase was a testing phase, involving cross-sectional studies with women. We carried out both exploratory factor analysis and confirmatory factor analysis, and tested the convergent and discriminant validity and the internal consistency of the scale. Thereafter, test-retest reliability was examined using an independent sample of 25 women. Phase 1: item generation and scale development phase This study was carried out to develop a scale for measuring factors influencing women’s breast cancer prevention behaviors. Items were derived from a secondary analysis of previous qualitative research conducted by Khazaee-Pool, in which Iranian women’s experiences with breast cancer preventive behaviors were explored. Based on the secondary analysis, using the Graneheim method, five main themes and 29 subthemes were considered to be key factors relating to breast cancer preventive behaviors. The framework is provided in Table . The item pool contained 97 items at this point. The content of the items was clarified, and redundant items were removed through discussion among the main investigator and the other researchers. Finally, the first draft of the scale was developed and consisted of 58 items. Each item was rated on a five-point response scale anchored at 1 = never to 5 = always. Thereafter, content and face validity were examined to develop the pre-final version of the instrument. Content validity Both qualitative and quantitative content validity were examined.
In the qualitative stage, a scientific expert panel (i.e., a team of investigators specialized in health education, breast cancer and psychometrics) assessed the content validity of the scale. The expert panel evaluated the wording, grammar, item allocation and scaling of the scale. In the quantitative stage, both the content validity index (CVI) and the content validity ratio (CVR) were calculated. The clarity, simplicity and relevance of each item were measured by the CVI . In order to calculate the CVI, a Likert-type ordinal scale with four possible responses was applied. The answers were rated from 1 = not relevant, not simple and not clear to 4 = very relevant, very simple and very clear . The CVI was assessed as the proportion of items that received a rating of 3 or 4 by the experts . A CVI score below .80 for an item was not acceptable . The CVR tested the essentiality of the items. To assess the CVR, the expert panel scored each item as 1 = essential , 2 = useful but not essential , or 3 = not essential . Then, based on the Lawshe Table , items with a CVR score of 0.62 or above were considered to be acceptable and were retained. In the quantitative stage, items with a CVR and a CVI less than .62 and .80, respectively, were deleted. In total, 9 items were deleted, resulting in a 49-item pool. The expert panel also revised the instrument with regard to grammar, wording and item allocation. For example, the sentence “Breast cancer destroys my femininity” was changed to “If I get breast cancer, my feminine identity will be lost”. The 49-item pool remained in the analyses below and consisted of positively worded and negatively worded statements with five response options: 1 = never , 2 = rarely , 3 = sometimes , 4 = often , and 5 = always. Face validity Both qualitative and quantitative methods were used to assess face validity. 
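The content-validity indices described above (CVI as the proportion of experts rating an item 3 or 4, and Lawshe's CVR) can be sketched in code. This is a minimal illustration, not the authors' analysis; the expert ratings below are hypothetical.

```python
# Sketch of the CVI and Lawshe CVR computations with hypothetical expert ratings.

def item_cvi(relevance_ratings):
    """CVI: proportion of experts rating the item 3 or 4 on the 4-point scale."""
    return sum(1 for r in relevance_ratings if r >= 3) / len(relevance_ratings)

def item_cvr(essentiality_ratings):
    """Lawshe CVR: (n_e - N/2) / (N/2), where n_e = experts rating 'essential' (1)."""
    n = len(essentiality_ratings)
    n_e = sum(1 for r in essentiality_ratings if r == 1)
    return (n_e - n / 2) / (n / 2)

# Ten hypothetical experts rate one item.
relevance = [4, 3, 4, 4, 2, 3, 4, 3, 4, 4]   # 1-4 relevance scale
essential = [1, 1, 1, 2, 1, 1, 1, 1, 3, 1]   # 1=essential, 2=useful, 3=not essential

cvi = item_cvi(relevance)   # 9/10 = 0.9 -> meets the .80 criterion
cvr = item_cvr(essential)   # (8 - 5)/5 = 0.6 -> below the .62 cutoff, so dropped
print(cvi, cvr)
```

Note that with ten raters the Lawshe critical value is .62, so an item endorsed as essential by eight of ten experts (CVR = .60) would still be removed.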
A group of women ( n = 10) were asked to evaluate each item of the questionnaire and to indicate if they felt ambiguity or difficulty in replying to the Iranian version of the ASSISTS questionnaire. Based on the participants’ viewpoints, the ambiguous items were adapted. In a quantitative phase, the impact score (frequency × importance) was assessed to show the percentage of women who identified each item as important or quite important on a five-point Likert scale. Items were considered to be appropriate if they had an impact score equal to or more than 1.5 (which corresponds to a mean frequency of 50 % and a mean importance of three on the five-point Likert scale) . In conclusion, all items had an impact score higher than 1.5. The range of impact score was from 1.9 to 5. None of the items were omitted, and the first form of the questionnaire containing 49 items was established for the next phase of psychometric evaluation. In other words, the group of women indicated that they experienced no difficulties reading and understanding the 49 items. Phase 2: testing phase The main study and the data collection In order to test the psychometric properties of the ASSISTS scale in a wider setting, a cross-sectional study was designed to be carried out in Tehran, Iran, from February 2012 to September 2014. A multistage cluster sampling was used. Firstly, Tehran (the capital of Iran) was separated into five areas: north, south, west, east and central. All health centers located in these five areas that were affiliated to the Tehran University of Medical Sciences were recognized. Then five health centers in each area were randomly chosen. Participants who visited health centers affiliated to Tehran University of Medical Sciences were entered into the study if they were 30 years old or older, literate and healthy (i.e., having no history of breast cancer) and wanted to take part in the study. 
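The item impact score (frequency × importance) used in the quantitative face-validity step above can be sketched as follows. This is a hedged illustration with hypothetical ratings, assuming "important or quite important" means a rating of 4 or 5 on the five-point scale.

```python
# Sketch of the item impact score used for quantitative face validity.

def impact_score(importance_ratings, important_threshold=4):
    """frequency = proportion rating the item important (>= threshold on 1-5);
    importance = mean importance rating; impact = frequency * importance."""
    n = len(importance_ratings)
    frequency = sum(1 for r in importance_ratings if r >= important_threshold) / n
    mean_importance = sum(importance_ratings) / n
    return frequency * mean_importance

ratings = [5, 4, 3, 4, 5, 2, 4, 5, 3, 4]  # ten hypothetical women, 1-5 scale
score = impact_score(ratings)             # 0.7 * 3.9 = 2.73
print(score >= 1.5)                       # item retained if impact score >= 1.5
```

The 1.5 cutoff corresponds to 50% of respondents rating the item important with a mean importance of 3, matching the criterion stated in the text.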
After the first author conducted a short interview and provided information about the aim of the study, women who agreed to participate in the study completed the ASSISTS scale. Besides the study scale, the demographic characteristics of participants including employment status, educational level and marital status were also collected. In order to collect data, trained investigators performed face-to-face interviews. Measures To establish the validity of the ASSISTS, we also administered the following scales to a group of women: The Multidimensional Scale of Perceived Social Support, the Cancer Attitude Scale, the Generalized Self-Efficacy Scale and the Stress Management Scale with regard to a health-promoting lifestyle. The Multidimensional Scale of Perceived Social Support (MSPSS) is a brief instrument developed to assess perceptions of support from three sources: family, friends and a significant other. The MSPSS comprises a total of 12 items, with four items for each of three subscales. Each item was rated on a seven-point Likert-type scale, ranging from 1 = very strongly disagree to 7 = very strongly agree. In several studies, the MSPSS has been shown to have good internal and test-retest reliability, good validity and a fairly stable factorial structure. It has been translated into many languages, including Farsi (Persian). The minimum and maximum scores of the questionnaire are 12 and 84, respectively. A higher score indicates greater perceived social support. A score of 65 or less is considered the cutoff point for eligibility of services. The Cronbach’s alpha coefficient for the total scale was .81, indicating good reliability in our sample. The Cancer Attitude Scale (CAS) is an Iranian validated questionnaire with 15 items assessing attitudes toward cancer. It has two domains, senses and beliefs (9 items) and worries (6 items).
The items were rated on a five-point Likert-type scale, anchored at the extremes with 1 = completely agree to 5 = completely disagree. All items were scored in the direction of a negative attitude, with higher scores indicating more negative attitudes toward cancer and preventive behaviors. The minimum score is 15 and the maximum is 75. The Cronbach’s alpha coefficient for the CAS was .84 in our sample. The Generalized Self-Efficacy Scale (GSE-10) is a 10-item scale developed by Schwarzer. This scale assesses self-efficacy based on subjects’ propensities that correlate to emotion, optimism and work satisfaction. It is a self-report measure of self-efficacy, rated on a four-point experience scale ranging from 1 = not at all true to 4 = exactly true. The total self-efficacy score is derived from all 10 items and ranges from 10 to 40, with higher scores indicating higher self-efficacy. This questionnaire has been confirmed to have good validity and reliability. The present study also found a Cronbach’s alpha of .76 for the total score. The Health Promoting Lifestyle-II (HPLP II) assesses individuals’ health-promoting behaviors based on Pender’s health promotion model. It is a 52-item instrument that yields a multidimensional profile of scores across six domains: nutrition (9 items), physical activity (8 items), interpersonal relations (9 items), stress management (8 items), health responsibility (9 items) and spiritual growth (9 items). In this study we have only used the stress management subscale of the instrument. The total score for the HPLP-II stress management subscale ranges from 8 to 32. A higher score indicates better stress management. Each item was rated on a four-point Likert-type scale, with 1 = never, 2 = sometimes, 3 = often, and 4 = always. The Cronbach’s alpha coefficient for the HPLP-II subscale was .70 in our sample. Statistical analysis Several statistical methods were applied to test the psychometric properties of the scale.
These are presented as follows. Validity Construct validity After the item analysis, the 49 remaining items were used to estimate the construct validity using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Furthermore, both convergent validity and divergent validity were assessed. Exploratory factor analysis EFA was applied to specify the main factors of the questionnaire. We estimated the sample size a priori. As recommended by Gable and Wolf, a sample of five to ten women per item is necessary in order to ensure a conceptually clear factor structure for analysis. The desired minimum required sample size was thus determined to be 250 women. These women were recruited from the health centers (see data collection section). A principal component analysis (PCA) with varimax rotation was used to extract the main factors. The Kaiser-Meyer-Olkin (KMO) measure and Bartlett’s test of sphericity were applied to assess the adequacy of the sample for the factor analysis. Any factor with an eigenvalue above 1 was considered significant for factor extraction, and a scree plot was used to specify the number of factors. Factor loadings equal to or greater than .40 were considered acceptable. Confirmatory factor analysis A confirmatory factor analysis was applied in order to assess the coherence between the data and the structure. Considering the possible attrition related to test-retest analysis, we planned to recruit a separate sample of 130 women from health centers affiliated to Tehran University of Medical Sciences. Assigning four individuals to each item, a sample size of 130 was estimated. The model fit was evaluated using multiple fit indices. As suggested, various fit indices measuring relative Chi-square, Goodness of Fit Index (GFI), Comparative Fit Index (CFI), Root Mean Square Error of Approximation (RMSEA), Non-Normed Fit Index (NNFI), Normed Fit Index (NFI) and Standardized Root Mean Square Residual (SRMR) were taken into account.
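The eigenvalue-above-1 (Kaiser) criterion used in the EFA step can be illustrated with a small NumPy simulation. This is not the authors' analysis; the simulated data below assume two latent factors with hypothetical loadings.

```python
# Minimal simulation of the Kaiser criterion: keep factors whose
# correlation-matrix eigenvalues exceed 1.
import numpy as np

rng = np.random.default_rng(0)
# Simulated responses: 250 women x 6 items, two correlated item clusters.
latent = rng.normal(size=(250, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.0], [0.6, 0.1],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.6]])
data = latent @ loadings.T + rng.normal(scale=0.5, size=(250, 6))

corr = np.corrcoef(data, rowvar=False)
eigenvalues = np.linalg.eigvalsh(corr)[::-1]   # sorted descending
n_factors = int(np.sum(eigenvalues > 1.0))     # Kaiser criterion
print(n_factors)
```

In practice the scree plot and KMO/Bartlett checks mentioned above would be inspected alongside the eigenvalues, but the retention rule itself reduces to the comparison shown here.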
The GFI, CFI, NNFI and NFI range between 0 and 1, but values of 0.90 or above are commonly indicated as acceptable model fits. An RMSEA value between .08 and .10 demonstrates an average fit, and a value below .08 shows a good fit. For the SRMR, values below .05 indicate a good fit, values between .05 and .08 indicate a close fit, and values between .08 and .10 are acceptable. Convergent & divergent validity To assess convergent and divergent validity, a new sample of 180 women aged 30 or above was recruited. Table provides the descriptive characteristics of the 180 women. Apart from the ASSISTS, the women also completed the Iranian validated versions of the MSPSS, CAS, GSE, and the stress management subscale of the HPLP-II. We first assessed the item-convergent validity by examining the correlations between the item scores and the subscale scores of the ASSISTS by use of the Spearman correlation coefficient. We expected that, for each subscale of the ASSISTS, the item scores of the subscale (e.g., self-care) would correlate more with the total score of the respective subscale (e.g., self-care), rather than the total score of other subscales (e.g., stress management). Correlation values between 0 and .20 are considered poor; between .21 and .40, fair; between .41 and .60, good; between .61 and .80, very good; and above .81, excellent. Item-convergent validity exists when an item has a significantly higher correlation with its own scale compared with the other scales, and item-divergent validity exists when an item has a lower correlation with other scales. Then we evaluated convergent and divergent validity of four subscales of the ASSISTS (stress management, attitudes, supportive system and self-efficacy) compared to the abovementioned validated questionnaires.
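The correlation-strength bands quoted above can be captured in a small helper. This is a hypothetical utility, not from the paper, mirroring the cutoffs as stated.

```python
# Helper mirroring the correlation-strength bands used to interpret
# convergent/divergent validity (hypothetical utility, not from the paper).

def correlation_strength(r):
    r = abs(r)  # strength of association, ignoring direction
    if r <= 0.20:
        return "poor"
    if r <= 0.40:
        return "fair"
    if r <= 0.60:
        return "good"
    if r <= 0.80:
        return "very good"
    return "excellent"

print(correlation_strength(0.15),   # "poor" -> supports divergent validity
      correlation_strength(0.35))   # "fair" -> moderate, supports convergent validity
```

Under the expectations stated in the text, a subscale-to-criterion correlation of .21 or above ("fair" or stronger) would count toward convergent validity, while .20 or below ("poor") would count toward divergent validity.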
For three subscales of the ASSISTS (self-care, motivation and information seeking) we were unable to assess convergent validity due to the lack of suitable dimensions or Iranian validated scales. Convergent validity is established when a subscale of the ASSISTS correlates moderately with the validated questionnaire (correlation .21 or above). We expected moderate correlations between the stress management subscale of the ASSISTS and the stress management subscale of the HPLP-II, between the attitude subscale of the ASSISTS and the CAS, between the supportive system subscale of the ASSISTS and the MSPSS, and between the self-efficacy subscale of the ASSISTS and the GSE-10. A poor correlation (.20 or lower) between a subscale of the ASSISTS and one of the validated questionnaires demonstrates divergent validity. Reliability Internal consistency Cronbach’s alpha coefficient was applied to assess the internal consistency of each item, the whole questionnaire and each dimension of the ASSISTS questionnaire. Alpha values equal to .70 or higher were considered acceptable. Test-retest The test-retest reliability was applied to examine the questionnaire’s stability by estimating the intraclass correlation coefficient (ICC). The scale was re-administered to 25 women two weeks after the first completion. ICC values of .40 or above are considered acceptable. All statistical analyses, except confirmatory factor analysis, were performed using SPSS 18.0. The confirmatory factor analysis was performed using LISREL 8.80.
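The internal-consistency check can be sketched with a direct implementation of Cronbach's alpha. This is a minimal NumPy sketch, not the authors' SPSS analysis; the item scores below are simulated under a hypothetical single-factor model.

```python
# Minimal sketch of Cronbach's alpha for internal consistency.
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
true_score = rng.normal(size=(100, 1))
scores = true_score + rng.normal(scale=0.8, size=(100, 4))  # 4 parallel items
alpha = cronbach_alpha(scores)
print(alpha >= 0.70)  # acceptable by the .70 criterion stated above
```

The ICC used for the two-week test-retest check is computed differently (from a two-way mixed-effects model in SPSS), but the acceptability logic is the same threshold comparison against .40.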
A secondary analysis of previous qualitative data was done to provide an initial indication of candidate items, to generate relevant items, to evaluate face and content validity, and to determine the most appropriate phrasing. The second phase was a testing phase, involving cross-sectional studies with women. We carried out both exploratory factor analysis and confirmatory factor analysis, and tested the convergent and discriminant validity and the internal consistency of the scale. Thereafter, test-retest reliability was examined using an independent sample of 25 women. This study was carried out to develop a scale for measuring factors influencing women’s breast cancer prevention behaviors. Items were derived from secondary analysis from a previous qualitative research conducted by Khazaee-Pool in which Iranian women’s experiences about breast cancer preventive behaviors were explored . Based on the secondary analysis, by Graneheim method , five main themes and 29 subthemes were considered to be key factors relating to breast cancer preventive behaviors. The framework is provided in Table . The item pool contained 97 items at this point. The content of the items was made clear, and extra items were omitted through discussion. The main investigator and other researchers read items and removed extra items. Finally, the first draft of the scale was developed and consisted of 58 items. Each item was rated on a five-point response scale anchored at 1 = never to 5 = always . Thereafter, content and face validity were examined to develop the pre-final version of the instrument. Content validity Both qualitative and quantitative content validity were examined. In the qualitative stage, a scientific expert panel (i.e., a team of investigators specialized in health education, breast cancer and psychometrics) assessed the content validity of the scale. The expert panel evaluated the wording, grammar, item allocation and scaling of the scale. 
In the quantitative stage, both the content validity index (CVI) and the content validity ratio (CVR) were calculated. The clarity, simplicity and relevance of each item were measured by the CVI . In order to calculate the CVI, a Likert-type ordinal scale with four possible responses was applied. The answers were rated from 1 = not relevant, not simple and not clear to 4 = very relevant, very simple and very clear . The CVI was assessed as the proportion of items that received a rating of 3 or 4 by the experts . A CVI score below .80 for an item was not acceptable . The CVR tested the essentiality of the items. To assess the CVR, the expert panel scored each item as 1 = essential , 2 = useful but not essential , or 3 = not essential . Then, based on the Lawshe Table , items with a CVR score of 0.62 or above were considered to be acceptable and were retained. In the quantitative stage, items with a CVR and a CVI less than .62 and .80, respectively, were deleted. In total, 9 items were deleted, resulting in a 49-item pool. The expert panel also revised the instrument with regard to grammar, wording and item allocation. For example, the sentence “Breast cancer destroys my femininity” was changed to “If I get breast cancer, my feminine identity will be lost”. The 49-item pool remained in the analyses below and consisted of positively worded and negatively worded statements with five response options: 1 = never , 2 = rarely , 3 = sometimes , 4 = often , and 5 = always. Face validity Both qualitative and quantitative methods were used to assess face validity. A group of women ( n = 10) were asked to evaluate each item of the questionnaire and to indicate if they felt ambiguity or difficulty in replying to the Iranian version of the ASSISTS questionnaire. Based on the participants’ viewpoints, the ambiguous items were adapted. 
In a quantitative phase, the impact score (frequency × importance) was assessed to show the percentage of women who identified each item as important or quite important on a five-point Likert scale. Items were considered appropriate if they had an impact score equal to or greater than 1.5 (which corresponds to a mean frequency of 50 % and a mean importance of three on the five-point Likert scale). All items had an impact score higher than 1.5; the impact scores ranged from 1.9 to 5. None of the items were omitted, and the first form of the questionnaire, containing 49 items, was established for the next phase of psychometric evaluation. In other words, the group of women indicated that they experienced no difficulties reading and understanding the 49 items.
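The three cutoffs described above (CVI ≥ .80, CVR ≥ .62, impact score ≥ 1.5) can be sketched as simple helper functions. This is our own minimal illustration, not the study's code, and the example ratings are invented:

```python
def cvi(ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4
    on the 4-point relevance/simplicity/clarity scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def cvr(ratings):
    """Lawshe's content validity ratio: (n_e - N/2) / (N/2), where n_e is
    the number of experts scoring the item 1 = essential. With 10 experts,
    the Lawshe critical value is the .62 used above."""
    n = len(ratings)
    n_e = sum(1 for r in ratings if r == 1)
    return (n_e - n / 2) / (n / 2)

def impact_score(ratings):
    """Impact score = frequency x importance: the proportion of respondents
    rating the item 4 or 5, times the mean rating on the 5-point scale."""
    freq = sum(1 for r in ratings if r >= 4) / len(ratings)
    importance = sum(ratings) / len(ratings)
    return freq * importance

# Nine of ten experts rate the item 3 or 4: CVI = 0.9 (>= .80, retained).
print(cvi([4, 4, 3, 3, 4, 4, 3, 4, 4, 2]))          # 0.9
# Nine of ten experts call the item essential: CVR = (9 - 5) / 5 = 0.8.
print(cvr([1, 1, 1, 1, 1, 1, 1, 1, 1, 2]))          # 0.8
# Half the women rate the item 4 or 5 and the mean importance is 3.0,
# giving exactly the minimum acceptable impact score of 1.5.
print(impact_score([4, 4, 5, 5, 5, 2, 1, 2, 1, 1])) # 1.5
```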
The main study and the data collection
In order to test the psychometric properties of the ASSISTS scale in a wider setting, a cross-sectional study was carried out in Tehran, Iran, from February 2012 to September 2014. A multistage cluster sampling was used. Firstly, Tehran (the capital of Iran) was divided into five areas: north, south, west, east and central. All health centers in these five areas affiliated with Tehran University of Medical Sciences were identified, and five health centers in each area were randomly chosen. Women who visited these health centers were entered into the study if they were 30 years old or older, literate and healthy (i.e., had no history of breast cancer) and wanted to take part in the study. After the first author conducted a short interview and explained the aim of the study, women who agreed to participate completed the ASSISTS scale. Besides the study scale, demographic characteristics of the participants, including employment status, educational level and marital status, were also collected. Data were collected by trained investigators in face-to-face interviews.

Measures
To establish the validity of the ASSISTS, we also administered the following scales to a group of women: the Multidimensional Scale of Perceived Social Support, the Cancer Attitude Scale, the Generalized Self-Efficacy Scale and the stress management subscale of the Health Promoting Lifestyle Profile-II. The Multidimensional Scale of Perceived Social Support (MSPSS) is a brief instrument developed to assess perceptions of support from three sources: family, friends and a significant other. The MSPSS comprises 12 items, four for each of its three subscales. Each item is rated on a seven-point Likert-type scale, ranging from 1 = very strongly disagree to 7 = very strongly agree.
In several studies, the MSPSS has been shown to have good internal consistency and test-retest reliability, good validity and a fairly stable factor structure. It has been translated into many languages, including Farsi (Persian). The minimum and maximum scores of the questionnaire are 12 and 84, respectively, and a higher score indicates greater perceived social support. A score of 65 or less is considered the cutoff point for eligibility for services. The Cronbach's alpha coefficient for the total scale was .81, indicating good reliability in our sample.

The Cancer Attitude Scale (CAS) is an Iranian validated questionnaire with 15 items assessing attitudes toward cancer. It has two domains: senses and beliefs (9 items) and worries (6 items). The items are rated on a five-point Likert-type scale, anchored at the extremes with 1 = completely agree and 5 = completely disagree. All items are scored in the direction of a negative attitude, with higher scores indicating more negative attitudes toward cancer and preventive behaviors. The minimum score is 15, and the maximum is 75. The Cronbach's alpha coefficient for the CAS was .84 in our sample.

The Generalized Self-Efficacy Scale (GSE-10) is a 10-item scale developed by Schwarzer. The scale assesses self-efficacy based on dispositions related to emotion, optimism and work satisfaction. It is a self-report measure, rated on a four-point scale ranging from 1 = not at all true to 4 = exactly true. The total self-efficacy score is derived from all 10 items and ranges from 10 to 40, with higher scores indicating higher self-efficacy. The questionnaire has been confirmed to have good validity and reliability. The present study found a Cronbach's alpha of .76 for the total score.

The Health Promoting Lifestyle Profile-II (HPLP-II) assesses individuals' health-promoting behaviors based on Pender's health promotion model.
It is a 52-item instrument that yields a multidimensional profile of scores across six domains: nutrition (9 items), physical activity (8 items), interpersonal relations (9 items), stress management (8 items), health responsibility (9 items) and spiritual growth (9 items). In this study we used only the stress management subscale of the instrument. The total score for the HPLP-II stress management subscale ranges from 8 to 32; a higher score indicates better stress management. Each item is rated on a four-point Likert-type scale, with 1 = never, 2 = sometimes, 3 = often, and 4 = always. The Cronbach's alpha coefficient for the HPLP-II subscale was .70 in our sample.

Statistical analysis
Several statistical methods were applied to test the psychometric properties of the scale. These are presented as follows.
Construct validity
After the item analysis, the 49 remaining items were used to estimate the construct validity using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). Furthermore, both convergent validity and divergent validity were assessed.
Exploratory factor analysis
EFA was applied to specify the main factors of the questionnaire. We estimated the sample size a priori. As recommended by Gable and Wolf, a sample of five to ten women per item is necessary to ensure a conceptually clear factor structure. The desired minimum sample size was thus determined to be 250 women, recruited from the health centers (see the data collection section). A principal component analysis (PCA) with varimax rotation was used to extract the main factors. The Kaiser-Meyer-Olkin (KMO) measure and Bartlett's test of sphericity were applied to assess the adequacy of the sample for factor analysis. Any factor with an eigenvalue above 1 was considered significant for factor extraction, and a scree plot was used to specify the number of factors. Factor loadings equal to or greater than .40 were considered acceptable.

Confirmatory factor analysis
A confirmatory factor analysis was applied to assess the coherence between the data and the factor structure. Considering possible attrition related to the test-retest analysis, we planned to recruit a separate sample of 130 women from health centers affiliated with Tehran University of Medical Sciences; assigning four individuals to each item, a sample size of 130 was estimated. The model fit was evaluated using multiple fit indices: the relative chi-square, the Goodness of Fit Index (GFI), the Comparative Fit Index (CFI), the Root Mean Square Error of Approximation (RMSEA), the Non-Normed Fit Index (NNFI), the Normed Fit Index (NFI) and the Standardized Root Mean Square Residual (SRMR). The GFI, CFI, NNFI and NFI range between 0 and 1, with values of .90 or above commonly indicating an acceptable model fit. An RMSEA value between .08 and .10 demonstrates an average fit, and a value below .08 shows a good fit.
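Two of these fit indices have simple closed forms and can be sketched as follows. This is an illustrative implementation of the standard formulas (relative chi-square and the population RMSEA estimate); the chi-square, degrees of freedom and sample size below are invented values, not the study's results:

```python
import math

def relative_chi_square(chi2, df):
    """Relative chi-square (chi2 / df); smaller values (near 1-3)
    are usually read as an acceptable fit."""
    return chi2 / df

def rmsea(chi2, df, n):
    """Root Mean Square Error of Approximation:
    sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Hypothetical model: chi2 = 930 on 500 degrees of freedom, n = 130.
print(relative_chi_square(930.0, 500))   # 1.86
print(round(rmsea(930.0, 500, 130), 3))  # 0.082
# When chi2 <= df, the RMSEA estimate is truncated at zero.
print(rmsea(400.0, 500, 130))            # 0.0
```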
For the SRMR, values below .05 indicate a good fit, values between .05 and .08 a close fit, and values between .08 and .10 an acceptable fit.

Convergent & divergent validity
To assess convergent and divergent validity, a new sample of 180 women aged 30 or above was recruited. Table provides the descriptive characteristics of these 180 women. Apart from the ASSISTS, the women also completed the Iranian validated versions of the MSPSS, the CAS, the GSE-10, and the stress management subscale of the HPLP-II. We first assessed item-convergent validity by examining the correlations between the item scores and the subscale scores of the ASSISTS using the Spearman correlation coefficient. We expected that, for each subscale of the ASSISTS, the item scores would correlate more strongly with the total score of their own subscale (e.g., self-care) than with the total scores of the other subscales (e.g., stress management). Correlation values between 0 and .20 are considered poor; between .21 and .40, fair; between .41 and .60, good; between .61 and .80, very good; and above .81, excellent. Item-convergent validity exists when an item correlates significantly more highly with its own scale than with the other scales, and item-divergent validity exists when an item correlates less with the other scales. We then evaluated the convergent and divergent validity of four subscales of the ASSISTS (stress management, attitudes, supportive system and self-efficacy) against the abovementioned validated questionnaires. For three subscales of the ASSISTS (self-care, motivation and information seeking), we were unable to assess convergent validity due to the lack of suitable dimensions or Iranian validated scales. Convergent validity is established when a subscale of the ASSISTS correlates moderately with the corresponding validated questionnaire (correlation .21 or above).
We expected moderate correlations between the stress management subscale of the ASSISTS and the stress management subscale of the HPLP-II, between the attitude subscale of the ASSISTS and the CAS, between the supportive system subscale of the ASSISTS and the MSPSS, and between the self-efficacy subscale of the ASSISTS and the GSE-10. A poor correlation (.20 or lower) between a subscale of the ASSISTS and one of the validated questionnaires demonstrates divergent validity.
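The item-convergent check described here can be illustrated with a small self-contained Spearman implementation (rank the scores, then take the Pearson correlation of the ranks). The functions and the eight-respondent data below are our own illustration, not the study's data:

```python
def ranks(xs):
    """Average ranks (1-based), with ties receiving their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean 1-based rank of the tied run i..j
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

item = [3, 4, 2, 5, 1, 4, 3, 5]
own_subscale = [14, 18, 9, 22, 6, 17, 13, 21]    # totals of the item's own subscale
other_subscale = [10, 9, 12, 11, 10, 8, 13, 9]   # totals of an unrelated subscale

# Item-convergent validity: the item tracks its own subscale far more closely.
print(spearman(item, own_subscale) > spearman(item, other_subscale))  # True
```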
Internal consistency
Cronbach's alpha coefficient was applied to assess the internal consistency of each item, the whole questionnaire and each dimension of the ASSISTS questionnaire. Alpha values of .70 or higher were considered acceptable.
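Cronbach's alpha has a simple closed form: alpha = k/(k−1) × (1 − Σ item variances / variance of the total score). A minimal sketch, with an invented three-item response matrix for eight respondents:

```python
def cronbach_alpha(items):
    """items: list of per-item score lists (same respondents in each list).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Three items answered consistently by eight respondents (illustrative data).
items = [
    [4, 5, 3, 5, 2, 4, 3, 5],
    [4, 4, 3, 5, 1, 4, 2, 5],
    [5, 5, 2, 4, 2, 5, 3, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.92 -> above the .70 threshold
```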
Test-retest
The test-retest reliability was applied to examine the questionnaire's stability by estimating the intraclass correlation coefficient (ICC). The scale was re-administered to 25 women two weeks after the first completion. ICC values of .40 or above are considered acceptable. All statistical analyses, except the confirmatory factor analysis, were performed using SPSS 18.0; the confirmatory factor analysis was performed using LISREL 8.80.

Construct validity
Exploratory factor analysis
The Kaiser-Meyer-Olkin measure was .733, and Bartlett's test of sphericity was significant (χ2 = 2180.98, p < .001), indicating adequacy of the sample for EFA. Initially, for the 49-item scale, 13 factors showed eigenvalues above 1.0, explaining 66.34 % of the variance. However, the scree plot suggested a 7-factor solution (Fig. ). This factor solution was explored by repeatedly assessing item performance and eliminating items in a step-by-step process. After eliminating the items with factor loadings below .40, we obtained a final factor solution consisting of a 33-item questionnaire loading on seven distinct constructs. These constructs jointly accounted for 60.62 % of the observed variance.
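The eigenvalue-above-1 (Kaiser) retention rule applied here can be sketched on a toy correlation matrix. The 4-item equicorrelation matrix below (r = .5) is illustrative; its eigenvalues are known analytically to be 2.5 and 0.5 (three times), so one factor is retained:

```python
import numpy as np

# 4-item equicorrelation matrix with r = .5 (illustrative, not study data).
R = np.full((4, 4), 0.5)
np.fill_diagonal(R, 1.0)

# Eigenvalues of the (symmetric) correlation matrix, largest first.
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]

# Kaiser criterion: retain factors with eigenvalue > 1.
retained = int(np.sum(eigenvalues > 1.0))
explained = eigenvalues[:retained].sum() / eigenvalues.sum()

print(eigenvalues)          # approx. [2.5 0.5 0.5 0.5]
print(retained)             # 1 factor retained
print(round(explained, 3))  # 0.625 -> 62.5 % of the variance
```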
As shown in Table , seven factors were found: factor 1 (supportive systems) included 5 items (items 10, 11, 12, 13 and 14), factor 2 (self-efficacy) included 3 items (items 7, 8 and 9), factor 3 (self-care) included 7 items (items 24, 25, 26, 27, 28, 29 and 30), factor 4 (stress management) included 3 items (items 31, 32 and 33), factor 5 (motivation) included 3 items (items 4, 5 and 6), factor 6 (information seeking) included 4 items (items 15, 16, 17 and 20) and factor 7 included 8 items (items 1, 2, 3, 18, 19, 21, 22 and 23). We refer to for the items of the ASSISTS.

Confirmatory factor analysis
We conducted a confirmatory factor analysis on the 33-item questionnaire to test the fit of the model obtained from the EFA. Figure shows the best model fit. Covariance matrices were used and fit indices were calculated. All fit indices proved to be good. The relative chi-square (χ2/df) was 1.86 (p < .001). The RMSEA of the model was .031 (90 % CI = .021–.089), and the SRMR was .030. All comparative indices of the model, including the GFI, AGFI, CFI, NNFI and NFI, were above .90 (.99, .98, .94, 1.00 and .98, respectively).

Convergent-divergent and concurrent validity
Table presents the item-convergent validity for the ASSISTS scale. As can be seen, all coefficients are higher than .20, and most are higher than .40. Self-care and self-efficacy had the lowest and the highest item-convergent validity, respectively (Table ). Convergent validity was assessed by the correlations between the subscales of the ASSISTS and the MSPSS, the CAS, the GSE and the stress management subscale of the HPLP-II. The correlation between the stress management subscale of the ASSISTS and that of the HPLP-II was .65, indicating very good convergent validity.
Likewise, the correlations between the attitudes, supportive systems and self-efficacy of the ASSISTS and the CAS, MSPSS and GSE, respectively, were between .42 and .45, indicating a good convergent validity. The other correlations were low (≤ .20), indicating that the divergent validity was good (Table ). Reliability To measure the reliability, the Cronbach’s alpha was calculated separately for the ASSISTS as well as for each factor of the ASSISTS. The Cronbach’s alpha coefficient for the ASSISTS was .80 and ranged from .79 to .85 for its subscales, which is well above the acceptable threshold, with the attitude subscale as an exception, with alpha = .69. Thus, no items of the instrument were omitted in this phase. In addition, test-retest analysis was conducted to test the stability of the instrument. The results indicated satisfactory results. Intraclass correlation (ICC) was .86 for the ASSISTS and ranged from .80 to .93 (good to excellent) for the subscales of the ASSISTS, lending support for the stability of the instrument, with the exception of the Attitude subscale, which had an ICC value slightly below the threshold (.79). The results are presented in Table . Exploratory factor analysis The Kaiser-Meyer-Olkin measure was .733, and the Bartlett’s test of sphericity was significant ( χ 2 = 2180.98, p < .001), indicating adequacy of the sample for EFA. Initially, for the 49-item scale, 13 factors showed eigenvalues above 1.0, explaining the 66.34 % variance. However, the scree plot showed a 7-factor solution (Fig. ). This factor solution was explored by repeatedly assessing the item performance with elimination of the items in a step-by-step process. After eliminating the items with factor loadings below .40, we obtained a final factor solution that consisted of a 33-item questionnaire loading on seven distinct constructs. These constructs jointly accounted for 60.62 % of the observed variance. 
As shown in Table , seven factors were found: Factor 1 (supportive systems) included 5 items (items 10, 11, 12, 13 and 14), factor 2 (self-efficacy) included 3 items (item 7, 8 and 9), factor 3 (self-care) included 7 items (items 24, 25, 26, 27, 28, 29 and 30), factor 4 (stress management) included 3 items (items 31, 32 and 33), factor 5 (motivation) included 3 items (items 4, 5 and 6), factor 6 (information seeking) included 4 items (items 15, 16, 17 and 20) and factor 7 included 8 items (items 1, 2, 3, 18, 19, 21, 22 and 23). We refer to for the items of the ASSISTS. Confirmatory factor analysis We conducted a confirmatory factor analysis on the 33-item questionnaire to test the fitness of the model obtained from the EFA. Figure shows the best model fit. Covariance matrixes were used and fit indexes were calculated. All fit indices proved to be good. The relative chi-square ( χ 2/df) was equal to 1.86 ( p < .001). The RMSEA of the model was .031 (90 % CI = .021 – .089), and the SRMR was .030. All comparative indices of the model, including GFI, AGFI, CFI, NNFI and NFI, were more than .90 (.99, .98, .94, 1.00 and .98 respectively). Convergent-divergent and concurrent validity Table presents the item-convergent validity for the ASSISTS scale. As can be seen, all coefficients are higher than .20, and most of them are higher than 0.40. Self-care and self-efficacy had the lowest and the highest item-convergent validity, respectively (Table ). Convergent validity was assessed by the correlation between the different subscales of the ASSISTS and the MSPSS, the CAS, the GSE and the stress management subscale of the HPLP-II. The correlation between the stress management subscale of the ASSISTS and the HPLP-II was .65, which indicated that the convergent validity was very good. 
In this study, we described the development and psychometric properties of a new instrument, called the ASSISTS, for assessing factors that affect women's breast cancer prevention behaviors. This is the first study to provide a measure for evaluating the factors associated with breast cancer preventive behaviors in Iranian women. The content of the instrument items was initially developed based on a secondary analysis of previous qualitative data to ensure that this new instrument covered all theoretical concepts of breast cancer preventive behaviors. After exploratory factor analysis, a 7-domain instrument emerged. A confirmatory factor analysis revealed that the fit of the data was satisfactory. As such, the final 33-item ASSISTS instrument contained seven subscales (attitudes, support systems, self-efficacy, information seeking, stress management, self-care and motivation). Items included in the attitudes and motivation subscales reflect conditions that might encourage women to engage in breast cancer preventive behaviors. The attitudes subscale can help practitioners because it includes factors that impede or facilitate preventive behaviors, including issues related to a woman's personal concerns. It is recognized that some factors, like knowledge, beliefs, attitudes, values and personal priorities, can motivate people to perform and modify their behavior . The self-care, stress management, information seeking and self-efficacy subscales include issues referring to personal skills, abilities, behaviors and habits that induce women to engage or not to engage in preventive behaviors. The information seeking subscale reflects the way people search for and apply information, both actively and passively. More specifically, it refers to women's practices for gaining health information via various sources, such as family, media, healthcare personnel and other means.
When women are aware of the importance of preventive behaviors, they will have greater motivation to perform them. Modifying behaviors, especially lifestyle behaviors, requires long-term investment; it is therefore unlikely that women will adopt such behaviors out of habit, without any conscious decision to do so. In addition, the stress management subscale covers a wide range of approaches aimed at controlling women's levels of stress, commonly for the purpose of enhancing everyday activities. For instance, a number of self-help approaches to stress prevention have been developed in the health centers affiliated with our university, such as relaxation, Quran reading, praying, positive thinking and establishing sleep and rest times. Self-efficacy has a positive impact on health-promoting behaviors and is associated with increased breast cancer preventive behaviors, so it is of great importance for behavioral change. Notably, women who had more positive expectations about breast cancer prevention felt more efficacious about practicing preventive behaviors in the face of barriers such as superstitious beliefs, prejudices, worries, feelings of giving up, a sense of shame, lack of a healthcare facility, or things going wrong there. In other words, a woman who believes she will benefit more from behaving actively may feel more efficacious in the face of barriers, thereby increasing the chance of adopting preventive behaviors. This is why preventive interventions should aim to change women's attitudes toward health and increase their self-efficacy. Items of the supportive systems subscale refer to factors that may facilitate the maintenance, repetition and consolidation of preventive behaviors. Support may come from family members, peers, healthcare workers, decision-makers and insurance systems.
It is well known that reinforcing behavior from other persons facilitates the continuation, repetition and stabilization of behavior . However, the focus of the present study was to develop a scale containing the most important factors related to breast cancer preventive behaviors, namely lifestyle behaviors and self-care. When addressing these behaviors in women, it is also important to address their unmet needs for social support . In the present study, we believe that women need instrumental, informational and emotional support to perform preventive behaviors, and thus we included all aspects of social support. For instance, women who receive support from different sources (e.g., family, friends) are more likely to participate in breast cancer prevention behaviors. However, taking into account the different aspects of social support, one direction for future studies might be to examine more thoroughly which aspects of support should be included. Generally, the findings showed satisfactory psychometric properties for the scale. The CVI and the CVR showed that the content validity was reasonable. In addition, the results of the exploratory and confirmatory factor analyses showed a good structure for the new questionnaire. Exploratory factor analysis revealed that the seven-factor structure of the instrument accounted for 60.62% of the total observed variance. A careful choice of scale items may be the reason for these satisfactory results. Furthermore, the CFA showed good fit indices for the current model, and the convergent validity of the subscales of the questionnaire was good, with the exception of the self-care subscale. With regard to the latter, all correlations between the items of the self-care subscale and its total score ranged between .21 and .36. Although these results are fair, the values are considerably lower than those of the other subscales.
One explanation might be that the items of the self-care subscale all reflect different aspects of self-care (e.g., following an educational program, following a healthy diet, doing physical activities). The internal consistency of the final instrument, as assessed by the Cronbach's alpha coefficient, was .80, reflecting acceptable reliability. In addition, the ICC score (.86), obtained from 25 women retested at a 2-week interval, indicated appropriate stability for the questionnaire. As such, we believe that this newly developed instrument may be especially helpful for healthcare teams to recognize and plan preventive health strategies that are functional and targeted to specific conditions. The inclusion of seven domains further allows health experts to understand how domains in need can be improved.

Limitations

Although the results of this study demonstrated several benefits, some limitations need to be considered. First, with regard to the sampling, we only interviewed women living in Tehran. Because these women are culturally homogeneous, their viewpoints cannot be generalized to women living in other cultures. Therefore, it might be interesting for future studies to investigate the reliability and validity of the ASSISTS in a sample of women from different cultural backgrounds and regions. Second, the majority of the women in the present study were highly educated (54%) or employed (66.6%). In future studies, it would be necessary to examine the psychometric properties of the ASSISTS in women from both urban and rural areas with different levels of education and economic status. Third, this study used a minimal criteria sample design to validate the ASSISTS scale. It remains to be seen in future studies with larger samples whether the present results will hold.
Fourth, another limitation of the study is that we used two different samples for our exploratory and confirmatory factor analyses. Although the same procedure was used to collect the data from the women, some background information of the samples was not the same, particularly employment status and education level. This might have impacted the results of our study. In summary, one of the goals for this century is preventing and controlling chronic diseases such as cancer . To do so, we developed the ASSISTS, which proved to have satisfactory psychometric properties. The ASSISTS assesses factors affecting breast cancer preventive behaviors that help to promote women's health.
Generally, the study findings suggest that the ASSISTS is a valid and reliable questionnaire to assess factors affecting women's breast cancer prevention behaviors. Further studies in different populations are recommended to establish stronger psychometric properties for the instrument.
Identification and Characterization of Metastasis-Initiating Cells in ESCC in a Multi-Timepoint Pulmonary Metastasis Mouse Model

Introduction

Esophageal squamous cell carcinoma (ESCC) is the major subtype of esophageal carcinoma, with a high mortality rate due to frequent relapse and metastasis. The five-year survival rate of metastatic patients is less than 5%. The lung is the most common distant metastatic organ (11.4%) in ESCC, with characteristics different from those of other metastatic sites. Cancer heterogeneity has been evinced in a variety of solid tumors, including ESCC, and acts as a cancer hallmark indicating the importance of genome instability in tumor progression. Intertumor heterogeneity has been widely studied across different ESCC patients to identify common genetic alterations that promote tumor progression, while intratumor heterogeneity, i.e., the identification of subpopulations within ESCC tumors, is an emerging field of research in cancer metastasis. It is believed that cancer metastasis originates from a small tumor subpopulation with unique characteristics, reflecting intratumor heterogeneity in gene expression profiles, and involves multiple simultaneous or sequential steps, including dissemination, colonization, and dormancy, which require migration and invasion into and out of the circulation and extracellular matrix (ECM), anchorage, proliferation of tumor cells, resistance to various stresses, and the establishment of a metastasis-favorable niche. This small subpopulation of tumor cells is called metastasis-initiating cells (MICs) because of its ability to form distant-organ metastases. Therefore, identification and characterization of MICs are crucial for understanding the mechanism of tumor metastasis as well as for early detection and intervention to reduce metastasis-related mortality. However, identification of MICs in ESCC remains inadequate, especially at the early metastatic stage.
Recently, we established a mouse lung metastasis model and isolated metastatic tumor cells from lungs at multiple timepoints for single-cell RNA-sequencing (scRNA-seq) analysis, to identify potential MICs with enhanced survival and metastatic properties together with their representative metastasis-initiating signatures (MIS) as biomarkers. Then, in silico, in vitro and in vivo experiments were performed using flow-sorted MIS-enriched subpopulations to investigate cell survival (resistance to oxidative stress and apoptosis), cell migration, invasion, stemness, and in vivo lung metastasis capabilities. Finally, the clinical relevance of MISs as predictive biomarkers for patient outcomes was tested by multiplex immunohistochemistry (mIHC) staining and statistical analyses based on co-expression patterns of MISs in cell lines, metastatic mouse lungs, and an ESCC tissue microarray (TMA).

Results

2.1 Establishment of a Pulmonary Metastatic Mouse Model with Multi-Time Tracking

To investigate the early-stage metastatic microenvironment of ESCC in vivo, we utilized a luciferase- and green fluorescent protein-labeled ESCC cell line (KYSE30-Luc-GFP) to establish a pulmonary metastatic mouse model with multi-time tracking. KYSE30-Luc-GFP cells (1.2 × 10^6) were injected into NOD SCID mice via the tail vein, and bioluminescence signals in the lungs were monitored longitudinally at multiple indicated timepoints (2 h, 6 h, 24 h, 48 h, 1 week, 2 months and 4 months). This mouse model exhibited dynamic in vivo bioluminescence signal expression along the timeline, with a gradual decrease in signal observed after inoculation, eventually becoming undetectable at 1 month and reappearing at 4 months with visible metastasis in resected lungs (Figure ).
2.2 Detection of Dynamic Viable Metastatic Tumor Cells in the Mouse Model

To quantify tumor cells surviving in the lungs after intravenous injection, viable tumor cells in the lungs at different timepoints were detected by immunohistochemistry (IHC) staining and counted by flow cytometry. IHC staining for human pan-Cytokeratin (pan-CK) was performed to count surviving metastatic colonies in mouse lungs at seven timepoints (2 h, 6 h, 24 h, 48 h, 1 week, 2 months, and 4 months) (Figure ). Based on tumor cell numbers, metastatic colonies were divided into single-cell, small (2-10 cells), medium (11-200 cells), and large (>200 cells) colonies (Figure ). The results showed that tumor cells initially seeded in the lungs as single cells or small colonies within 6 h; the number of colonies then decreased, and most survivors proliferated as small colonies at 24 h. Most of the seeded single cells and small colonies died within 48 h, and the survivors maintained a low proliferative capacity through week 1. Tumor colonies were further reduced at 2 months, mostly to medium colonies, and visible metastatic nodules were observed at 4 months. We therefore hypothesize that there are two major crises for early metastatic survival: overcoming stresses and establishing metastasis-favoring niches. First, most circulating tumor cells (CTCs) die because they cannot withstand the various stresses encountered on their way to distant organs, such as loss of anchorage in the circulation (i.e., anoikis), oxidative stress, and immune attack. Second, tumor cells need to establish a special early metastatic microenvironment that helps them implant into lung tissue and form metastatic colonies, while preventing immune attack and obtaining nutrients. Next, flow cytometry was performed to count GFP-positive living tumor cells in the lungs of each mouse at 6 h, 48 h, 2 months, and 4 months (Figure ).
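The colony-size bins used in the IHC counts map directly onto a simple classifier; a minimal sketch using the boundary values quoted above (the example counts are hypothetical):

```python
def classify_colony(n_cells: int) -> str:
    """Bin a metastatic colony by cell count, following the IHC size classes."""
    if n_cells < 1:
        raise ValueError("a colony must contain at least one cell")
    if n_cells == 1:
        return "single cell"
    if n_cells <= 10:       # 2-10 cells
        return "small"
    if n_cells <= 200:      # 11-200 cells
        return "medium"
    return "large"          # >200 cells

# Hypothetical colony counts from one lung section
counts = [1, 3, 8, 45, 250]
sizes = [classify_colony(n) for n in counts]
```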
The results showed that tumor cell numbers bottomed out at 48 h, gradually rebounded at 2 months, and increased exponentially at 4 months, confirming the existence of a temporary survival crisis on day 2 and increased cell proliferation in the micrometastatic state at 2 months. It is worth noting that the multi-timepoint IHC series provides a more accurate picture of the dynamically surviving tumor cells in the lungs than flow cytometry analysis. In summary, the multi-timepoint IHC series and flow cytometry analysis found that most CTCs survived for only a few days in the lungs, and only a small proportion of CTCs that overcome the early metastatic survival crisis are able to maintain latency and eventually form metastatic nodules.

2.3 Identification of Metastasis-Initiating Cells (MICs)

To characterize tumor cells at different metastatic stages, GFP-positive living tumor cells were flow-sorted from freshly dissociated mouse lungs at four timepoints (6 h, 48 h, 2 months, and 4 months). With parental cells (0 h) added, 10× scRNA-seq was performed to study the expression profiles of five timepoints. After quality control, 19 986 tumor cells remained for subsequent bioinformatics analysis (Figure , right). We integrated the five timepoints and corrected for batch effects (Figure ). Principal component analysis (PCA) was used for dimensionality reduction, and the single-cell spatial distributions were displayed overall (Figure ) and for each timepoint (Figure ). Notably, three major spatial distribution patterns related to metastatic stage were seen across the five timepoints: the parental stage at 0 h, the early metastatic stage at 6 and 48 h, and the late metastatic stage at 2 and 4 months. Parental-stage cells were mostly distributed in the left half and far lower right corner; early-invasion-stage cells were scattered in the middle, spanning from left to right; and late-metastasis-stage cells were mostly distributed in the right part of the plots.
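The dimensionality-reduction step can be illustrated with a bare-bones SVD-based PCA. This is a simplified numpy sketch on simulated counts, not the study's actual scRNA-seq pipeline (which additionally involves normalization, feature selection, and batch correction):

```python
import numpy as np

def pca(x: np.ndarray, n_components: int):
    """Project cells (rows) onto the top principal components via SVD."""
    centered = x - x.mean(axis=0)                # center each gene
    u, s, vt = np.linalg.svd(centered, full_matrices=False)
    scores = centered @ vt[:n_components].T      # cell coordinates in PC space
    explained = (s ** 2) / (x.shape[0] - 1)      # variance per component
    return scores, explained[:n_components]

# Simulated expression matrix: 100 cells x 50 genes of Poisson counts
rng = np.random.default_rng(0)
expr = rng.poisson(2.0, size=(100, 50)).astype(float)
scores, var = pca(expr, n_components=2)
```

Because SVD returns singular values in descending order, the explained variances come out sorted, and centering guarantees each principal-component score has zero mean.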
To check for heterogeneity within timepoints, we next performed clustering classifications within each timepoint (Figure ). Interestingly, we found a distinct subpopulation, Parental Cluster 1 (Cluster of Survival: Cluster S), which separated from the remaining clusters at the parental stage (Parental Clusters 2–11). Expression profiling analysis found that Cluster S is not comparable to other clusters at other timepoints. Next, we performed an integrated cluster classification of all cells collected from the five timepoints and found a total of 12 integrated clusters (Clusters 0–11) (Figure ). By collating, Cluster S corresponds to cells from integrated Clusters 1, 2, 5, 6, and 8. We further investigated the integrated clusters and discovered a dynamic expression pattern along the time axis: integrated Clusters 1, 2, 5, 7, 8, and 11 were enriched along the establishment of metastatic colonies (Figure ), suggesting that Cluster S cells were largely enriched in the metastatic subpopulations. These results provided promising evidence that Cluster S cells might favor the early survival of metastatic cells to facilitate metastasis initiation, and thus might be MICs.

2.4 Cluster S Represents MICs with Enhanced Metastasizing Features

To visualize the dynamic change of the Cluster S fraction across timepoints, we classified cells into Cluster S and non-Cluster S (Figure ). Most Cluster S cells (blue) were distributed in the right half of the UMAP plot and were dramatically enriched upon early invasion. Percentage analyses revealed that the Cluster S fraction increased from 14.5% in the parental state to more than half of the population throughout metastasis, with an increasing trend (55.1% at 6 h, 84.1% at 4 months; Figure ), suggesting that Cluster S cells may be MICs owing to their enhanced survival and metastatic capacities.
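The timepoint-wise enrichment percentages reduce to per-timepoint fractions of Cluster S labels; a toy sketch with made-up cells whose proportions mirror the reported trend (14.5% → 55.1% → 84.1%):

```python
from collections import Counter

def cluster_s_fraction(labels):
    """Fraction of Cluster S cells per timepoint.

    labels: iterable of (timepoint, is_cluster_s) pairs, one per cell.
    """
    totals, hits = Counter(), Counter()
    for tp, is_s in labels:
        totals[tp] += 1
        hits[tp] += bool(is_s)
    return {tp: hits[tp] / totals[tp] for tp in totals}

# Hypothetical cells reproducing the direction of the reported trend
cells = ([("0h", True)] * 29 + [("0h", False)] * 171
         + [("6h", True)] * 55 + [("6h", False)] * 45
         + [("4m", True)] * 84 + [("4m", False)] * 16)
frac = cluster_s_fraction(cells)
```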
To evaluate the metastatic properties of Cluster S cells, the top 100 differentially upregulated genes (p < 0.05, log2 fold-change > 1) of Cluster S were selected for Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analysis, overall and for individual timepoints (Figure ; Figure , Supporting Information). The results showed that Cluster S from the overall integration exhibited enhanced cell differentiation and adhesion, an increased response to lipids, and activated innate immune responses, with upregulated pathways related to cancer immunity (IL-17, estrogen, and CAMs signaling), tumor metastasis (p53, apelin, and hippo signaling), and angiogenesis (apelin signaling; Figure ). Cluster S at the parental stage showed enhanced locomotion, cell adhesion, cell motility, and wound healing, as well as signaling pathways related to tumor migration (PI3K-Akt, ErbB, and p53 signaling), interaction (ECM-receptor interaction, CAMs signaling), and immunity (leukocyte transendothelial migration; Figure ). In summary, Cluster S possesses an integral function in cancer cell migration and cancer-related immunity, with stage-specific functions at different timepoints.

2.5 Identification of the Potential MIC Signature Genes

To further characterize the metastatic properties of Cluster S, we investigated the differentially upregulated genes in Cluster S and identified 7 representative genes as a potential metastasis-initiating signature (MIS), including CD44, CST6, C19orf33, TACSTD2, S100A14, RHOD, and TM4SF1 (see selection criteria in Methods). We next examined the dynamic population expression patterns of these MISs at each timepoint. The results showed that MIS-positive cells for all 7 signature genes were mainly detected in Cluster S of the parental population (on average 14%) and were significantly enriched at the invasion stages (Figure ). We next confirmed that MISs were highly expressed in the Cluster S subpopulation at the parental stage (Figure ).
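The gene-selection rule feeding the enrichment analysis (p < 0.05, log2 fold-change > 1, top 100 genes) can be written as a small filter; a sketch over a hypothetical differential-expression table (the failing gene names are invented placeholders):

```python
def top_upregulated(de_table, p_cut=0.05, lfc_cut=1.0, top_n=100):
    """Select top differentially upregulated genes of a cluster.

    de_table: list of (gene, log2_fold_change, p_value) tuples.
    Genes must pass p < p_cut and log2FC > lfc_cut, ranked by fold change.
    """
    hits = [(g, lfc, p) for g, lfc, p in de_table
            if p < p_cut and lfc > lfc_cut]
    hits.sort(key=lambda row: row[1], reverse=True)
    return [g for g, _, _ in hits[:top_n]]

# Hypothetical differential-expression results for Cluster S
de = [("CD44", 2.4, 1e-8), ("TM4SF1", 1.7, 3e-5),
      ("GENE_A", 0.6, 1e-4),   # fails the fold-change cutoff
      ("GENE_B", 2.9, 0.20)]   # fails the significance cutoff
genes = top_upregulated(de, top_n=100)
```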
Individual MIS genes showed similar spatial distribution patterns in both the ensemble (Figure ) and parental stages (Figure ), with the exception of CST6 in the ensemble. Since MISs are highly representative of the metastasis-initiating Cluster S subpopulation, we hypothesize that expression of these genes is important for maintaining the early survival of MICs and facilitating the establishment of a pro-metastasis TME.

2.6 MISs 1+1 Signature Filtering Defines MICs at the Transcriptional Level

To identify MICs by a less stringent simultaneous co-expression of MISs, a MISs 1+1 filter was used to define MICs. Co-expression of any 1 gene from Group A (C19orf33, S100A14, and RHOD; count ≥ 1) and any 1 gene from Group B (CST6, CD44, TM4SF1, and TACSTD2; count ≥ 1) in a single tumor cell defined a MIC (Figure , Supporting Information box). Evaluation of the MISs 1+1 filter efficiency in the parental population demonstrated satisfactory positive predictive value (81.5%), negative predictive value (99.4%), sensitivity (99.5%), specificity (79.9%), and accuracy (89.0%) in representing the Cluster S subpopulation (Figure , Supporting Information). Comparison of the spatial distribution patterns of MISs 1+1-positive cells with any-MIS-positive cells (Figure , Supporting Information) and the Cluster S subpopulation (Figure ) at all timepoints verified that MISs 1+1-positive cells were highly representative of Cluster S at the parental state and of the surviving cells at early metastatic states.

2.7 MIS-Enriched (CD44 high ) Cells Possess Metastasis-Initiating Properties

To validate the potential pro-survival and metastasis-enhancing properties of MICs, MIS-enriched cells were flow-sorted with an anti-CD44 antibody (Figure , Supporting Information).
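The 1+1 rule and its evaluation metrics are straightforward to encode; a sketch using the gene groups quoted above, with hypothetical per-cell counts and Cluster S membership standing in as ground truth:

```python
GROUP_A = ("C19orf33", "S100A14", "RHOD")
GROUP_B = ("CST6", "CD44", "TM4SF1", "TACSTD2")

def is_mic(counts: dict) -> bool:
    """MISs 1+1 filter: >=1 count for any Group A gene AND any Group B gene."""
    return (any(counts.get(g, 0) >= 1 for g in GROUP_A)
            and any(counts.get(g, 0) >= 1 for g in GROUP_B))

def filter_metrics(predicted, actual):
    """PPV, NPV, sensitivity, specificity and accuracy of a binary filter."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    tn = sum((not p) and (not a) for p, a in zip(predicted, actual))
    fp = sum(p and (not a) for p, a in zip(predicted, actual))
    fn = sum((not p) and a for p, a in zip(predicted, actual))
    return {"ppv": tp / (tp + fp), "npv": tn / (tn + fn),
            "sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp),
            "accuracy": (tp + tn) / len(actual)}

# Hypothetical cells: per-gene counts, with Cluster S membership as truth
cells = [({"RHOD": 2, "CD44": 1}, True),   # Group A + Group B -> MIC
         ({"S100A14": 1}, False),          # Group A only -> not a MIC
         ({"TACSTD2": 3}, False),          # Group B only -> not a MIC
         ({}, False)]
pred = [is_mic(c) for c, _ in cells]
truth = [s for _, s in cells]
metrics = filter_metrics(pred, truth)
```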
ScRNA-seq data from the mouse model verified that a high proportion of CD44-enriched cells simultaneously co-express other signatures, such as C19orf33, TACSTD2, S100A14, RHOD and TM4SF1, in cell culture and across different metastasis stages and cluster groups (Figure , Supporting Information). Bulk RNA sequencing was applied to compare the expression profiles of CD44 high and CD44 low cells (Table , Supporting Information), and 113 differentially upregulated genes were identified in CD44 high cells (Figure ). GO and KEGG analyses of the top 100 upregulated genes revealed that CD44 high cells are enriched in cell migration, organ development, stress responses, neuron development, and lipid metabolism (Figure ). These results are consistent with our predictions about the metastatic properties and TME-remodeling capacities of MICs, suggesting a potential enhancement of the survival and metastatic behavior of MICs. To test this prediction, we performed a series of in vitro functional assays on CD44-enriched and TACSTD2-enriched subpopulations. Compared with CD44 low cells, CD44 high cells exhibited significantly enhanced abilities in wound healing (p < 0.05), cell migration (p < 0.01), cell invasion (p < 0.001), foci formation (anchorage-dependent, p < 0.05), colony formation in soft agar (anchorage-independent, p < 0.05), and spheroid formation (self-renewal, p < 0.05) (Figure ). We also found that CD44 high cells have a better anti-apoptotic ability (p < 0.05) despite increased production of reactive oxygen species (ROS) after oxidative stress stimulation (p < 0.05) (Figure ). Western blot analysis of epithelial (β-catenin, ZO-1, and E-cadherin) and mesenchymal (vimentin, slug, and snail) markers elucidated the quasi-epithelial-mesenchymal (quasi-EM) state of CD44 high cells, which express both epithelial and mesenchymal markers simultaneously (Figure ).
These results confirm the duality of enhanced migration and invasion abilities together with anchorage-dependent proliferation and self-renewal ability in CD44 high cells, suggesting that MICs may possess high plasticity in epithelial-mesenchymal transition (EMT) to adapt to a dynamic TME upon stimulation. Quantitative PCR (qPCR) comparing CD44 high and CD44 low cells was used to detect the expression levels of three other MIS markers (S100A14, TM4SF1, and CST6) in CD44 high cells. The results showed that all three markers were significantly upregulated in CD44 high cells compared with CD44 low cells (Figure ), confirming the co-expression of MISs in MICs at the RNA level. In addition, we evaluated the functional behaviors of TACSTD2-enriched cells. TACSTD2 high cells showed significantly enhanced wound healing (p < 0.01), migration (p < 0.001), invasion (p < 0.001), and foci formation (p < 0.05) abilities (Figure , Supporting Information). All these results demonstrated an overall enhanced metastatic and proliferative capacity of the MIS-enriched subpopulations.

2.8 MICs Switch from a Partial-Epithelial State to a Quasi-Epithelial-Mesenchymal State During Early Metastasis Establishment

To address the EMT state of MICs during metastatic progression, we evaluated the scRNA expression of an epithelial marker (CTNNB1, the gene for β-catenin) and a mesenchymal marker (SNAI1, the gene for Snail) across timepoints (Figure , Supporting Information). Based on our data, we defined the epithelial (E) state as over 50% of the population positive (count > 0) for CTNNB1, the mesenchymal (M) state as over 10% of the population positive (count > 0) for SNAI1, and the quasi-epithelial-mesenchymal (quasi-EM) state as both conditions met simultaneously. Our results show a dynamic change of EMT state: from partial-epithelial (partial-E) states in parental Cluster S and the E state in parental non-Cluster S, to the M state at 6 h, the quasi-EM state at 48 h, and a return to the E state at 2 and 4 months.
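The population-level EMT definitions above translate into a simple rule; a sketch using the thresholds stated in the text (the marker-positive fractions in the example calls are hypothetical):

```python
def emt_state(frac_ctnnb1_pos: float, frac_snai1_pos: float) -> str:
    """Population-level EMT call from marker-positive fractions.

    E if >50% of cells express CTNNB1, M if >10% express SNAI1,
    quasi-EM if both conditions hold, otherwise undetermined.
    """
    e = frac_ctnnb1_pos > 0.50
    m = frac_snai1_pos > 0.10
    if e and m:
        return "quasi-EM"
    if e:
        return "E"
    if m:
        return "M"
    return "undetermined"

# Hypothetical marker-positive fractions for three cell populations
states = [emt_state(0.70, 0.02),   # epithelial
          emt_state(0.30, 0.25),   # mesenchymal
          emt_state(0.65, 0.20)]   # quasi-EM, as described at 48 h
```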
This demonstrated the plasticity of MICs in switching EMT phenotypes upon stimulation by the surrounding environment to adapt and to facilitate survival and metastatic colony formation.

2.9 MIS-Enriched Subpopulations Have Early-Survival and Metastatic Colonization Properties

To evaluate the in vivo metastasis-initiating properties of MICs, we established a mouse pulmonary metastasis model by intravenously inoculating CD44- or TACSTD2-flow-sorted MIS-enriched cells into NOD SCID mice. Bioluminescence signals were monitored from 2 h to 2 months in mice injected with CD44 high , CD44 low , or parental KYSE30-Luc-GFP cells. The CD44 high group showed the strongest signals in vivo and in freshly resected lungs at 40 h and 2 months compared with the CD44 low and control groups (Figure ), suggesting enhanced abilities of MIS-enriched cells to survive in the new niche (40 h) and establish colonization at a later stage (2 months). However, a stronger signal only indicates more surviving metastatic cells and cannot account for the number and size of the colonies formed. Thus, we performed IHC staining to estimate the number and size of colonies and flow cytometry analysis to quantify the number of viable cells. IHC staining showed that the CD44 high group formed larger (Figure ) and more (p < 0.05, Figure ) colonies than the CD44 low and control groups at 2 months, and accelerated the formation of visible metastatic nodules (>200 cells) in the lungs at 2 months (Figure ). Flow cytometry analysis of GFP intensity showed that the CD44 high group retained more viable tumor cells at both the early (40 h, p < 0.05) and late (2 months, p < 0.05) metastatic stages compared with the CD44 low group (Figure ), again echoing the large nodules observed in the IHC analysis. We next investigated the TACSTD2-enriched subpopulation.
Establishment of a Pulmonary Metastatic Mouse Model with Multi-Time Tracking To investigate the early-stage metastatic microenvironment of ESCC in vivo, we utilized a luciferase- and green fluorescent protein-labeled ESCC cell line (KYSE30-Luc-GFP) to establish a pulmonary metastatic mouse model with multi-time tracking. KYSE30-Luc-GFP cells (1.2 × 10 6 ) were injected into NOD SCID mice via the tail vein, and bioluminescence signals in the lungs were monitored longitudinally at multiple indicated timepoints (2 h, 6 h, 24 h, 48 h, 1 week, 2 months, and 4 months). This mouse model exhibited dynamic in vivo bioluminescence signals along the timeline, with a gradual decrease in signal observed after inoculation, eventually becoming undetectable at 1 month and reappearing at 4 months with visible metastases in resected lungs ( Figure ). Detection of Dynamic Viable Metastatic Tumor Cells in the Mouse Model To quantify tumor cells surviving in the lungs after intravenous injection, viable tumor cells were detected at different timepoints by immunohistochemistry (IHC) staining and counted by flow cytometry.
IHC staining of human pan-cytokeratin (pan-CK) was performed to count surviving metastatic colonies in mouse lungs at seven timepoints: 2 h, 6 h, 24 h, 48 h, 1 week, 2 months, and 4 months (Figure ). Based on tumor cell numbers, metastatic colonies were divided into single-cell, small (2-10 cells), medium (11-200 cells), and large (>200 cells) metastatic colonies (Figure ). The results showed that tumor cells initially seeded in the lungs as single cells or small colonies within 6 h; the number of colonies then decreased, and most survivors proliferated as small colonies at 24 h. Most of the seeded single cells and small colonies died within 48 h, and the survivors maintained low proliferative capacity through week 1. Tumor colonies were further reduced at 2 months, mostly to medium colonies, and visible metastatic nodules were observed at 4 months. We therefore hypothesize that there are two major crises for early metastatic survival: overcoming stresses and establishing metastasis-favoring niches. First, most circulating tumor cells (CTCs) die because they cannot withstand the various stresses encountered on the way to distant organs, such as loss of anchorage in the circulation (i.e., anoikis), oxidative stress, and immune attack. Second, tumor cells need to establish a special early metastatic microenvironment that helps them implant into lung tissue and form metastatic colonies while evading immune attack and obtaining nutrients. Next, flow cytometry was performed to count GFP-positive living tumor cells in the lungs of each mouse at 6 h, 48 h, 2 months, and 4 months (Figure ). The results showed that tumor cell numbers bottomed out at 48 h, gradually rebounded at 2 months, and rose exponentially at 4 months, confirming the existence of a temporary survival crisis on day 2 and increased cell proliferation in the micrometastatic state at 2 months.
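The four colony-size categories used in the IHC analysis amount to a simple binning rule on tumor-cell count. The sketch below makes the thresholds explicit; the example counts are hypothetical and not study data.

```python
def classify_colony(n_cells: int) -> str:
    """Bin a metastatic colony by tumor-cell count, following the categories
    above: single cell, small (2-10 cells), medium (11-200 cells), and
    large (>200 cells, i.e., a visible metastatic nodule)."""
    if n_cells < 1:
        raise ValueError("a colony must contain at least one cell")
    if n_cells == 1:
        return "single"
    if n_cells <= 10:
        return "small"
    if n_cells <= 200:
        return "medium"
    return "large"

# Illustrative colony counts (hypothetical, not measured values):
counts = [1, 4, 57, 350]
print([classify_colony(n) for n in counts])  # ['single', 'small', 'medium', 'large']
```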
It is worth noting that the multi-timepoint IHC series provides a more accurate picture of the dynamics of surviving tumor cells in the lungs than flow cytometry analysis. In summary, the multi-timepoint IHC series and flow cytometry analysis found that most CTCs survived for only a few days in the lungs, and only the small proportion of CTCs that overcome the early metastatic survival crisis are able to maintain latency and eventually form metastatic nodules. Identification of Metastasis-Initiating Cells (MICs) To characterize tumor cells at different metastatic stages, GFP-positive living tumor cells were flow-sorted from freshly dissociated mouse lungs at four timepoints (6 h, 48 h, 2 months, and 4 months). Together with parental cells (0 h), 10× scRNA-seq was performed to study the expression profiles across five timepoints. After quality control, 19,986 tumor cells remained for subsequent bioinformatics analysis ( Figure , right). We integrated the five timepoints and corrected for batch effects (Figure ). Principal component analysis (PCA) was used for dimensionality reduction, and single-cell spatial distributions were displayed overall (Figure ) and for each timepoint (Figure ). Notably, three major spatial distribution patterns related to metastatic stage were evident across the five timepoints: the parental stage at 0 h, the early metastatic stage at 6 and 48 h, and the late metastatic stages at 2 and 4 months. Parental-stage cells were mostly distributed in the left half and far lower right corner; early invasion-stage cells were scattered in the middle, spanning from left to right; and late metastasis-stage cells were mostly distributed in the right part of the plots. To check for heterogeneity within timepoints, we next performed clustering classification within each timepoint (Figure ). Interestingly, we found a distinct subpopulation, Parental Cluster 1 (Cluster of Survival: Cluster S), which separated from the rest of the clusters at the parental stage (Parental Clusters 2-11).
Expression profiling analysis found that Cluster S is not comparable to other clusters at other timepoints. Next, we performed integrated cluster classification of all cells collected from the five timepoints and found a total of 12 integrated clusters (Clusters 0-11) (Figure ). By collation, Cluster S corresponds to cells from integrated Clusters 1, 2, 5, 6, and 8. We further investigated the integrated clusters and discovered a dynamic expression pattern along the time axis: integrated Clusters 1, 2, 5, 7, 8, and 11 were enriched along the establishment of metastatic colonies (Figure ), suggesting that Cluster S cells were largely enriched in metastatic subpopulations. These results provided promising evidence that Cluster S cells might favor the early survival of metastatic cells to facilitate metastasis initiation, and thus may be MICs. Cluster S Represents MICs with Enhanced Metastasizing Features To visualize the dynamic change of the Cluster S fraction across timepoints, we classified cells into Cluster S and non-Cluster S ( Figure ). Most Cluster S cells (blue) were distributed on the right half of the UMAP plot and were dramatically enriched upon early invasion. Percentage analyses revealed that Cluster S was enriched from 14.5% in the parental state to more than half the population throughout metastasis, with an increasing trend (55.1% at 6 h, 84.1% at 4 months; Figure ), suggesting that Cluster S cells may be MICs owing to their enhanced survival and metastatic capacities. To evaluate the metastatic properties of Cluster S cells, the top 100 differentially upregulated genes ( p < 0.05, log 2 fold-change >1) of Cluster S were selected for Gene Ontology (GO) and Kyoto Encyclopedia of Genes and Genomes (KEGG) enrichment analyses, both overall and for individual timepoints (Figure ; Figure , Supporting Information).
The results showed that Cluster S from the overall integration exhibited enhanced cell differentiation and adhesion, an increased response to lipid, and activated innate immune responses, with upregulated pathways related to cancer immunity (IL-17, estrogen, and CAMs signaling), tumor metastasis (p53, apelin, and hippo signaling), and angiogenesis (apelin signaling; Figure ). Cluster S at the parental stage showed enhanced locomotion, cell adhesion, cell motility, and wound healing, as well as signaling pathways related to tumor migration (PI3K-Akt, ErbB, and p53 signaling), interaction (ECM-receptor interaction, CAMs signaling), and immunity (leukocyte transendothelial migration; Figure ). In summary, we found that Cluster S possesses an integral function in cancer cell migration and cancer-related immunity, with stage-specific functions at different timepoints. Identification of the Potential MIC Signature Genes To further characterize the metastatic properties of Cluster S, we investigated the differentially upregulated genes in Cluster S and identified 7 representative genes as a potential metastasis-initiating signature (MIS): CD44 , CST6 , C19orf33 , TACSTD2 , S100A14 , RHOD , and TM4SF1 (see selection criteria in Methods). We next examined the dynamic population expression patterns of these MISs at each timepoint. The results showed that MIS-positive cells for all 7 signature genes were mainly detected in Cluster S of the parental population (on average 14%) and were significantly enriched at the invasion stages (Figure ). We next confirmed that MISs were highly expressed in the Cluster S subpopulation at the parental stage (Figure ). Individual MIS genes showed similar spatial distribution patterns in both the ensemble (Figure ) and parental stages (Figure ), with the exception of CST6 in the ensemble.
Since MISs are highly representative of the metastasis-initiating Cluster S subpopulation, we hypothesize that expression of these genes is important for maintaining the early survival of MICs and facilitating the establishment of a pro-metastasis TME. MISs 1+1 Signature Filtering Defines MICs at the Transcriptional Level To identify MICs by less stringent simultaneous co-expression of MISs, a MISs 1+1 filter was used to define MICs. Co-expression of any 1 gene from Group A (C19orf33, S100A14, and RHOD; count ≥1) and any 1 gene from Group B (CST6, CD44, TM4SF1, and TACSTD2; count ≥1) in a single tumor cell defined that cell as a MIC (Figure , Supporting Information box). Evaluation of the MISs 1+1 filter in the parental population demonstrated satisfactory positive predictive value (81.5%), negative predictive value (99.4%), sensitivity (99.5%), specificity (79.9%), and accuracy (89.0%) in representing the Cluster S subpopulation (Figure , Supporting Information). Comparison of the spatial distribution patterns of MISs 1+1-positive cells with any-MIS-positive cells (Figure , Supporting Information) and the Cluster S subpopulation (Figure ) at all timepoints verified that the spatial expression patterns of MISs 1+1-positive cells were highly representative of Cluster S at the parental state and of the surviving cells at early metastatic states. MIS-Enriched (CD44 high ) Cells Possess Metastasis-Initiating Properties To validate the potential pro-survival and metastasis-enhancing properties of MICs, MIS-enriched cells were flow-sorted with an anti-CD44 antibody (Figure , Supporting Information). ScRNA-seq data from the mouse model verified that a high proportion of CD44-enriched cells simultaneously co-express the other signatures (C19orf33, TACSTD2, S100A14, RHOD, and TM4SF1) in cell culture and across different metastasis stages and cluster groups (Figure , Supporting Information).
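The 1+1 rule and the reported filter metrics (PPV, NPV, sensitivity, specificity, accuracy) can be expressed directly on per-cell transcript counts. Below is a minimal sketch; the cell dictionaries and the Cluster S reference labels are synthetic illustrations, not the study's data.

```python
GROUP_A = ("C19orf33", "S100A14", "RHOD")
GROUP_B = ("CST6", "CD44", "TM4SF1", "TACSTD2")

def is_mic(cell_counts):
    """MISs 1+1 filter: a cell (dict of gene -> transcript count) is called
    a MIC when at least one Group A gene AND at least one Group B gene
    have count >= 1."""
    has_a = any(cell_counts.get(g, 0) >= 1 for g in GROUP_A)
    has_b = any(cell_counts.get(g, 0) >= 1 for g in GROUP_B)
    return has_a and has_b

def filter_metrics(cells, truth):
    """Evaluate the filter against a reference labeling (e.g., Cluster S
    membership). Assumes a non-degenerate confusion matrix."""
    tp = fp = fn = tn = 0
    for cell, in_cluster_s in zip(cells, truth):
        called = is_mic(cell)
        if called and in_cluster_s:
            tp += 1
        elif called:
            fp += 1
        elif in_cluster_s:
            fn += 1
        else:
            tn += 1
    return {
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / len(cells),
    }

# Synthetic cells (hypothetical counts, not study data):
cells = [
    {"S100A14": 2, "CD44": 1},   # Group A + Group B -> MIC
    {"CD44": 3, "TM4SF1": 1},    # Group B only -> not a MIC
    {"RHOD": 1},                 # Group A only -> not a MIC
    {},                          # neither group
]
print([is_mic(c) for c in cells])  # [True, False, False, False]
```

In the study, the analogous calculation against Cluster S membership in the parental population yielded the figures quoted above (PPV 81.5%, NPV 99.4%, etc.); the numbers produced by this toy example are of course arbitrary.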
Bulk RNA sequencing was applied to compare the expression profiles of CD44 high and CD44 low cells (Table , Supporting Information), and 113 differentially upregulated genes were identified in CD44 high cells ( Figure ). GO and KEGG analyses of the top 100 upregulated genes revealed that CD44 high cells are enriched in cell migration, organ development, stress responses, neuron development, and lipid metabolism (Figure ). These results are consistent with our predictions about the metastatic properties and TME remodeling capacities of MICs, suggesting potentially enhanced survival and metastatic behavior of MICs. To test this prediction, we performed a series of in vitro functional assays on CD44-enriched and TACSTD2-enriched subpopulations. Compared with CD44 low cells, CD44 high cells exhibited significantly enhanced abilities in wound healing ( p < 0.05), cell migration ( p < 0.01), cell invasion ( p < 0.001), foci formation (anchorage-dependent, p < 0.05), colony formation in soft agar (anchorage-independent, p < 0.05), and spheroid formation (self-renewal, p < 0.05) (Figure ). We also found that CD44 high cells have better anti-apoptotic ability ( p < 0.05) despite increased production of reactive oxygen species (ROS) after oxidative stress stimulation ( p < 0.05) (Figure ). Western blot analysis of epithelial (β-catenin, ZO-1, and E-cadherin) and mesenchymal (vimentin, slug, and snail) markers elucidated the quasi-epithelial-mesenchymal (quasi-EM) state of CD44 high cells, which express both epithelial and mesenchymal markers simultaneously (Figure ). These results confirm the duality of enhanced migration and invasion abilities alongside anchorage-dependent proliferation and self-renewal in CD44 high cells, suggesting that MICs may possess high plasticity in epithelial-mesenchymal transition (EMT) to adapt to a dynamic TME upon stimulation.
Quantitative PCR (qPCR) was used to compare expression levels of the other three MIS markers ( S100A14 , TM4SF1 , and CST6 ) between CD44 high and CD44 low cells. All three markers were significantly upregulated in CD44 high cells compared with CD44 low cells (Figure ), confirming the co-expression of MISs in MICs at the RNA level. In addition, we evaluated the functional behaviors of TACSTD2-enriched cells. TACSTD2 high cells showed significantly enhanced wound healing ( p < 0.01), migration ( p < 0.001), invasion ( p < 0.001), and foci formation ( p < 0.05) abilities (Figure , Supporting Information). All these results demonstrate an overall enhanced metastatic and proliferative capacity of the MIS-enriched subpopulations. MICs Switch from a Partial-Epithelial State to a Quasi-Epithelial-Mesenchymal State During Early Metastasis Establishment To address the EMT state of MICs during metastatic progression, we evaluated scRNA expression of an epithelial marker (CTNNB1, the gene for β-catenin) and a mesenchymal marker (SNAI1, the gene for Snail) across timepoints (Figure , Supporting Information). Based on our data, we defined the epithelial (E) state as over 50% of the population positive (count > 0) for CTNNB1, the mesenchymal (M) state as over 10% of the population positive (count > 0) for SNAI1, and the quasi-epithelial-mesenchymal (quasi-EM) state as meeting both criteria simultaneously. Our results show a dynamic change in EMT state: from a partial-epithelial (partial-E) state in parental Cluster S and an E state in parental non-Cluster S, to an M state at 6 h, a quasi-EM state at 48 h, and back to an E state at 2 and 4 months. This demonstrates the plasticity of MICs in switching EMT phenotypes upon stimulation by the surrounding environment to adapt and facilitate survival and metastatic colony formation.
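These population-level thresholds can be written as a small classifier. Treating "partial-E" as the fallback when neither threshold is met is our simplification of the definitions above, and the marker-positive fractions in the example are hypothetical, not study measurements.

```python
def emt_state(frac_ctnnb1_pos: float, frac_snai1_pos: float) -> str:
    """Classify a cell population's EMT state with the thresholds above:
    epithelial (E) if >50% of cells are CTNNB1-positive, mesenchymal (M)
    if >10% are SNAI1-positive, quasi-EM if both hold, and (as a
    simplification) partial-E if neither threshold is met."""
    e = frac_ctnnb1_pos > 0.50
    m = frac_snai1_pos > 0.10
    if e and m:
        return "quasi-EM"
    if e:
        return "E"
    if m:
        return "M"
    return "partial-E"

# Illustrative trajectory with hypothetical positive fractions:
trajectory = {"6 h": (0.30, 0.25), "48 h": (0.60, 0.20), "2 months": (0.70, 0.05)}
for timepoint, (ctnnb1, snai1) in trajectory.items():
    print(timepoint, emt_state(ctnnb1, snai1))
```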
MIS-Enriched Subpopulations Have Early-Survival and Metastatic Colonization Properties To evaluate the in vivo metastasis-initiating properties of MICs, we established a mouse pulmonary metastasis model by intravenously inoculating CD44- or TACSTD2-flow-sorted MIS-enriched cells into NOD SCID mice. Bioluminescence signals were monitored from 2 h to 2 months in mice injected with CD44 high , CD44 low , or parental KYSE30-Luc-GFP cells. The CD44 high group showed the strongest signals in vivo and in freshly resected lungs at 40 h and 2 months compared with the CD44 low and control groups ( Figure ), suggesting enhanced abilities of MIS-enriched cells to survive in the new niche (40 h) and to establish colonization at a later stage (2 months). However, a stronger signal only indicates that more metastatic cells survived; it cannot account for the number and size of colonies formed. We therefore performed IHC staining to estimate the number and size of colonies and flow cytometry to quantify the number of viable cells. IHC staining showed that the CD44 high group formed larger (Figure ) and more ( p < 0.05, Figure ) colonies than the CD44 low and control groups at 2 months, and accelerated the formation of visible metastatic nodules (>200 cells) in the lungs at 2 months (Figure ). Flow cytometry analysis of GFP intensity showed that the CD44 high group retained more viable tumor cells at both the early (40 h, p < 0.05) and late metastatic (2 months, p < 0.05) stages compared with the CD44 low group (Figure ), again echoing the large nodules observed in the IHC analysis. We next investigated the TACSTD2-enriched subpopulation. Signature co-expression analyses of the scRNA-seq data validated that TACSTD2-enriched cells have an expression pattern similar to that of CD44-enriched cells in culture and across different metastasis stages and cluster groups (Figure , Supporting Information).
Similar to the CD44 high group, the TACSTD2 high group retained stronger bioluminescence signals in the lungs (Figure , Supporting Information) and formed larger colonies (Figure , Supporting Information) at 2 months compared with the MIS low groups, though the difference in the number of tumor sites formed was not statistically significant (Figure , Supporting Information). Taken together, MIS-enriched subpopulations possess enhanced in vivo metastatic capacities for early survival and late colonization in the lungs. Detection of MICs Using MIS Markers by Multiplex IHC (mIHC) To test the potential of MISs for MIC detection, mIHC staining was used to detect the co-expression patterns of 4 selected MISs (CD44, S100A14, RHOD, and TACSTD2) in four ESCC cell lines (KYSE30, KYSE180, KYSE410, and KYSE520). Across the four cell lines, ≈10–25% and 20–40% of tumor cells co-expressed 4 and 3 MISs, respectively (Figure , Supporting Information), indicating the prevalence of the MIC subpopulation in ESCC. We next applied mIHC staining to the pulmonary metastasis mouse model to investigate the expression pattern of MISs in mouse lungs along the metastatic timeline and to elucidate its correlation with tumor cell survival and metastasis in vivo ( Figure ). We found that most single cells did not express MISs 6 h after inoculation; only a few subpopulations displayed co-expression, similar to parental cells. Interestingly, most cells without MIS expression died during the survival crisis within 2 days, whereas most surviving colonies co-expressed MISs at 2 days and 1 week (Figure ). More MIS-co-expressing tumor cells propagated in metastatic colonies at 2 and 4 months, forming larger nodules, suggesting that MIS co-expression is essential for overcoming the early survival crisis and for the subsequent establishment of metastatic colonization.
MISs Co-Expression Scores Predict Patient Outcomes We next examined MIS expression in clinical samples and evaluated the applicability of MIS co-expression scores in predicting the prognosis of ESCC patients. mIHC staining for the 4 selected MISs with high-quality antibodies was performed on an ESCC tissue microarray (TMA) containing 244 cases with primary tumors (PT), 57 of which had paired lymph node metastases (LNM) ( Figure ; Tables and , Supporting Information). We first calculated individual signature scores and co-expression signature scores for PT and paired LNM nodules, as well as differential individual and co-expression signature scores for each patient by comparing PT and LNM scores. We then performed statistical analyses of these scores against the corresponding patient outcomes. High scores for single TACSTD2 expression ( p < 0.01; p = 0.039) and S100A14-TACSTD2 co-expression ( p < 0.05; p = 0.047) in primary tumors were significantly associated with both the presence of LNM (N1 staging) and poorer overall survival (OS) (Figure , Supporting Information), respectively. Comparing LNM with PT, high differential scores for single RHOD expression (R = 0.378, p = 0.006; p = 0.015), single TACSTD2 expression (R = 0.356, p = 0.010; p = 0.029), and S100A14-TACSTD2 co-expression (R = 0.353, p = 0.011; p = 0.03) were positively associated with an increased LNM ratio (positive LN ratio) and poorer OS (Figure , Supporting Information), respectively. These results demonstrate the value of individual or dual signature expression in predicting LNM and OS, even though only a portion of the MISs panel was used. Therefore, we next formulated a differential MISs score (dMISs) by comparing LNM scores with PT scores, calculating co-expression scores using all four MISs, and then analyzed correlations with patient outcomes. We calculated the dMISs for each patient and performed Cox regression to classify patients into dMISs high and dMISs low groups (for example, Figure ).
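The excerpt does not spell out the exact scoring formula, so the sketch below assumes, purely for illustration, that a tissue core's co-expression score is the fraction of tumor cells expressing all four panel MISs, with dMISs taken as the LNM score minus the paired PT score; the Cox-regression step that dichotomizes patients into dMISs high and dMISs low is omitted.

```python
MIS_PANEL = ("CD44", "S100A14", "RHOD", "TACSTD2")

def coexpression_score(cells):
    """Fraction of tumor cells in a tissue core that co-express all four
    MISs (a hypothetical scoring scheme; the study's exact formula is not
    given in this excerpt). Each cell is a dict of marker -> signal."""
    positive = sum(all(c.get(g, 0) > 0 for g in MIS_PANEL) for c in cells)
    return positive / len(cells)

def dmis(pt_cells, lnm_cells):
    """Differential MISs score: LNM co-expression score minus the paired
    primary-tumor (PT) score."""
    return coexpression_score(lnm_cells) - coexpression_score(pt_cells)

# Hypothetical patient: 10% of PT cells vs 40% of LNM cells co-express the panel.
pt = [{g: 1 for g in MIS_PANEL}] * 1 + [{}] * 9
lnm = [{g: 1 for g in MIS_PANEL}] * 4 + [{}] * 6
print(round(dmis(pt, lnm), 2))  # 0.3
```

Under this scheme, a positive dMISs means the panel is more strongly co-expressed in the lymph node metastasis than in the primary tumor, which is the direction the study associates with worse outcomes.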
We found that a higher dMISs correlated with poorer OS ( p = 0.023), an increased LNM ratio (positive LN ratio; R = 0.363, p = 0.009), poorer tumor differentiation grade ( p = 0.039), and an increased chance of developing carcinothrombosis ( p < 0.01; Figure ). These findings suggest that increased co-expression of MISs in lymph node tissue results in a higher dMISs compared with PT and predicts worse patient prognosis. Discussion There is an urgent demand to fill in the missing early metastatic single-cell expression profiles in ESCC, and especially to identify MICs and their corresponding expression heterogeneity, in order to develop a panel of MISs for targeting MICs and preventing metastasis. In the present study, we successfully established a multi-timepoint pulmonary metastasis mouse model to monitor the in vivo metastasis timeline and collected single-cell transcriptome profiles of living metastatic tumor cells harvested from mouse lungs at different timepoints. Despite previous theoretical assumptions about the sequential metastatic events from invasive cell seeding to metastatic colonization in various cancer types, experimental demonstration of this sequential pattern has been incomplete. Here, we first visualized the in vivo survival patterns of metastatic cells in the lungs, from single colonies to metastatic colonization. Next, we identified a dynamic MIC subpopulation (Cluster S) from scRNA-seq cluster classification and revealed its unique transcriptome. We selected a panel of 7 genes as the MIS to represent MICs and developed a MISs 1+1 filter to recognize these cells at the transcriptional level. By performing GO and KEGG analyses of scRNA-seq data from Cluster S and bulk RNA-seq data from the MIS-enriched subpopulation (CD44 high ), we reconfirmed the metastatic properties and TME remodeling capacities of surviving cells at the early metastasis stage, via embryonic-like tumor implantation and neuro-immuno-oncology remodeling approaches.
An increasing number of studies have investigated the role of neuro-immuno-oncology interactions, as well as neurovascular coupling, in cancer progression and metastasis, raising the importance of the nervous system and signaling components such as neurotrophic factors, neurotransmitters, and neuroendocrine factors in shaping the TME. Multiple studies in other cancers, such as breast, prostate, pancreatic, gastric, lung, colorectal, and head and neck cancer, have reported the cancer-promoting effect of innervation, which refers to cancer-associated neural migration and neurogenesis leading to increased nerve density, showing potential positive feedback loops in cancer progression and metastasis. However, such studies in ESCC are limited, and to date all of them have examined perineural invasion rather than cancer innervation. Thus, more experimental evidence is needed to verify the role of the nervous system in ESCC progression and metastasis. In terms of the isolation of the MIS-enriched subpopulation, our study has demonstrated, through a variety of in vitro functional assays and in vivo metastatic mouse models, that the MIS-enriched subpopulation may represent the MIC subpopulation in ESCC, with enhanced migration, invasion, proliferation, and self-renewal abilities and resistance to anoikis, oxidative stress, and apoptosis. We illustrated that MICs are equipped with both early-survival and metastasis-colonizing abilities to overcome the two crises along the metastatic timeline. Interestingly, we also confirmed the quasi-EM state of the MIS-enriched subpopulation, which expresses both epithelial and mesenchymal markers simultaneously in the scRNA-seq data and in cell line western blotting analyses.
The MIS-enriched subpopulation demonstrates both epithelial and mesenchymal properties and behaviors: functional assays revealed enhanced migratory ability, and the metastatic mouse model revealed enhanced anchorage and proliferation abilities. Similar results previously reported in colorectal, breast, prostate, and lung cancer suggest the presence of quasi-EM phenotypes and spontaneous epithelial-mesenchymal plasticity (EMP) states, referring to the partial state between complete epithelial and complete mesenchymal transitions (pEMT) in small subpopulations. Such a quasi-epithelial-mesenchymal-transition (quasi-EMT) state could be readily activated to transition toward a complete epithelial or mesenchymal state upon stimulation, acting as a switch. In this study, we further evidenced the presence of quasi-EM states in metastatic cells through the multi-timepoint animal model. There was a dynamic shift in the EMT state of the MIC (Cluster S) subpopulation: from a partial-E state resting in the parental population, to an M state in cells surviving in vivo at very early metastasis (6 h), then to quasi-EM at 48 h, and back to an E state at later metastatic progression (2 and 4 months). This suggests that MICs readily transform from a partial-E state to M and quasi-EM states upon stimulation by a changing microenvironment, for better adaptation and establishment of metastatic colonies. The M to quasi-EM to E dynamics also revealed the sequential involvement of migration and invasion ability at the very beginning of metastasis, to move out of the original tumor niche and enter the new niche (displayed as the M state); beyond invading, anchorage ability was also required at a slightly later stage of early metastasis to settle down (displayed as the quasi-EM state); and finally, proliferation ability dominated during colonization at the late metastasis stage (displayed as the E state).
This suggests that the high EMP of MICs enables them to intrude into the new niche and metastasize in a more mesenchymal-like state, while achieving anchorage-dependent proliferation, TME remodeling, or dormancy entry in a more epithelial-like state. Our findings shed light on future extended investigation of the detection of quasi-EMT states in clinical samples and the evaluation of their predictive value for patient prognosis. Our study depicted the specific MIC subpopulation in ESCC via multiplex staining of the simultaneous co-expression of our MIS panel (i.e., CD44, S100A14, RHOD, and TACSTD2) in cell lines and illustrated the enrichment of MICs in metastatic mouse lungs along the timeline. We also used our MISs to develop differential MIS (dMIS) scores from PT and LNM TMA tissues of ESCC patients, with higher dMIS scores successfully predicting poorer OS, an increased LNM ratio, poorer tumor differentiation grade, and an increased occurrence of carcinothrombosis. Of note, the intravenous injection-based metastasis mouse model mimics the presence of tumor cells in the circulation and their evasion of vessels, partially resembling the situation in patients with carcinothrombosis, in which tumor cells have a higher chance of shedding into the circulation, eventually evading the bloodstream and settling in distal organs. Although the model was based on blood vessel rather than lymph vessel metastasis, the dMIS score successfully predicted both carcinothrombosis and lymph node metastasis: patients with carcinothrombosis had significantly higher dMIS scores, and a high dMIS correlated with an increased number of tumor-positive lymph nodes detected. These findings demonstrate the potential common involvement of the signature genes in the intravasation and extravasation processes of both blood and lymph vessels.
Transferable clinical applications of our MIS panel could aid diagnosis, guide adjustment of therapeutic treatments, and improve awareness of the potential development of metastasis and vascular obstruction. Limitations of the Study The mouse model used in this study was the immunocompromised NOD SCID mouse inoculated with human ESCC cells to characterize genes upregulated in human ESCC MICs. Its adaptive immune system is compromised: B cells and T cells are present but non-functional. Therefore, only the effect of the innate immune system on tumor cells could be examined, and investigation of the adaptive immune system was not possible in this study. Further validation of the proposed signature panel in immunocompetent mice would clarify the effect of the adaptive immune system on MICs. Because the scRNA-seq utilized solely tumor cells retrieved from different metastatic stages in a multi-timepoint manner, direct cell-cell interactions with the surrounding TME could not be investigated; interaction with and response to the surrounding TME were assessed only through the identification of differentially expressed genes, GO enrichment of biological processes, and KEGG pathway analyses. Future investigations could include both early metastasizing ESCC cells and the surrounding cells of the TME to provide a more in-depth examination of MIC-TME early metastasis interactions. Conclusion In summary, our study first captured the stage-specific expression profiles of ESCC cells along the metastatic progression timeline to fill in the missing pieces, identified and verified a specific quasi-mesenchymal MIC subpopulation with a MIS panel, and revealed potential early-stage metastasis neuro-immuno-oncological TME remodeling approaches that facilitate metastatic colonization. We also utilized this MIS panel to predict ESCC patient outcomes and look forward to further refining it for clinical practice. The authors declare no conflict of interest. C.N.W. and Y.Z.
contributed equally to this work as co-first authors. C.N.W. conceived the study, coordinated and performed the experiments, conducted data analyses and bioinformatics analyses, and wrote the manuscript. Y.Z. performed related experiments. B.R. conducted scRNA-seq data analyses. S.W. assisted in related experiments and data analyses. H.Z. assisted in data analyses. J.L. assisted in related experiments. Y.L. assisted in data analyses. Y.Q. provided clinical samples. P.J. assisted in data analyses. V.H.-F.L. supervised the study. X.-Y.G. conceived and supervised the study. All the authors have read and approved the manuscript.

Supporting Information
NGS-guided precision oncology in metastatic breast and gynecological cancer: first experiences at the CCC Munich LMU

In women, metastatic breast cancer and gynecological malignancies are among the most frequent causes of cancer death. In 2018, there were an estimated 2,088,849 new cases of breast cancer and 626,679 deaths, 569,847 new cases of cervical cancer and 311,365 deaths, and 295,414 new cases of ovarian cancer and 184,799 deaths worldwide. Despite the rising overall incidence, the mortality rate has steadily decreased owing to early detection and improvements in the therapeutic management of these patients. However, although the development of new drugs, vaccines, and systematic screening programs has improved patients’ outcomes, effective measures to successfully treat metastatic cancer are still missing. With the advent of molecular diagnostics, cancer treatment entered a new era. New DNA sequencing techniques such as comprehensive genomic profiling (CGP) and hotspot next-generation sequencing (NGS) provide tools for deciphering complete genes and, later, entire genomes at unprecedented speed . These new approaches led to the development of a novel cancer treatment movement known as precision medicine. By selecting the most effective treatment based on the molecular characteristics of tumor tissues or other biologic parameters of the malignant disease, precision medicine aims to offer personalized treatment concepts to cancer patients with limited standard-of-care options. Molecular therapeutic agents (MTAs) targeting individual actionable molecular alterations have been successfully developed in the past few years, showing the positive impact of molecular-based therapy on cancer patients’ outcomes .
These include the use of the human epidermal growth factor receptor 2 (HER2) antibody trastuzumab in breast cancer, the tyrosine kinase inhibitor imatinib in chronic myelogenous leukemia associated with the BCR-ABL fusion gene, and EGFR tyrosine kinase inhibitors in lung carcinomas . Breast and gynecological cancers constitute a heterogeneous group of malignant diseases associated with multiple genetic alterations . In the past few years, a growing number of molecular markers in breast cancer, for example, have been investigated, and some of them are now well established as reliable predictors of prognosis and response to tumor therapy (Fig. a). Moreover, many different targeted therapies have been approved for use in breast cancer treatment (Fig. b). The approval of the PIK3CA-specific inhibitor alpelisib is the most recent example of targeted agents moving into routine care. Treatment with alpelisib was shown to prolong progression-free survival (PFS) by more than 6 months compared to the control arm. In gynecologic malignancies, MTAs have also been successfully implemented into clinical care. For example, early data from a clinical phase II trial focusing on BRCA-mutated ovarian cancer showed that olaparib as maintenance treatment significantly improved PFS in relapsed platinum-sensitive ovarian cancer . In 2018, these data could be transferred to the first-line setting when the treatment effects of the SOLO1 trial were presented . Owing to an impressive PFS improvement and a 70% lower risk of disease progression or death with olaparib compared to placebo, this led to the incorporation of PARP inhibitors into the primary treatment of ovarian cancer in 2019 . However, when it comes to other gynecologic malignancies such as endometrial cancer, the development of MTAs lags behind that in other malignancies. By detecting potentially actionable pathways using molecular diagnostics, it is also possible to assess and treat various cancer types.
For example, the ERBB2/PIK3/AKT/mTOR pathway is known for its relevance in breast cancer, but recently a relevant actionable mutation from the same pathway, PIK3R1 W624R, was also identified in ovarian cancer . Another study suggested that some subtypes of cervical cancer may also benefit from existing ERBB2/PIK3/AKT/mTOR-targeted agents . With the rising number of MTAs, and considering the heterogeneous molecular profiles of breast cancer and gynecological malignancies, it is reasonable to expect that patients with these malignancies could benefit from the implementation of precision oncology based on CGP into clinical care. Promising early data for such malignancies have been presented in multiple trials. In breast cancer, many reports of such driver alterations have emerged in the past few years, suggesting that patients could profit from precision medicine and targeted therapies . For example, the SAFIR01 multicenter prospective trial presented data on precision medicine benefitting breast cancer patients: 9 out of 43 patients (21%) responded to the recommended targeted therapy with stable disease lasting over 16 weeks . In ovarian cancer, multiplatform molecular profiling, conducted at a commercial profiling center, led to significantly longer post-profiling survival in patients treated with profile-guided targeted agents compared to the control group . With the technical advances in molecular diagnostics and the continuous approval of many targeted therapies, the growing field of precision medicine is constantly expanding and requires optimization. Considering the complexity of precision medicine in oncology, it was reasonable to create a molecular tumor board (MTB) to leverage the knowledge of the many different disciplines involved in oncological treatment and to provide optimal treatment recommendations.
In this manuscript, first experiences of the Comprehensive Cancer Center (CCC) LMU Munich Molecular Tumor Board are presented. The aim of this project was to retrospectively measure the impact of MTB discussions and recommendations made by a multidisciplinary tumor board on the outcome of patients with breast and gynecological cancers progressing under standard treatment. Detailed information is presented, including data on patient characteristics, diagnostic and treatment recommendations, implementation of the recommendations, and outcome of treated patients with breast and gynecological cancers (ovarian, endometrial, cervical, and other cancer types). All patients reported here were discussed in the local MTB, which reviewed clinical cases and the respective tumor profiles with the associated actionable alterations. The final result of each MTB case discussion was a report focused on the NGS data and on potential diagnostic and therapeutic alternatives. The MTB thereby operated as a multidisciplinary team (MDT) comprising clinical oncologists, pathologists, molecular pathologists, genetic counselors, bioinformaticians, and scientists with expertise in genetics and tumor profiling in diverse cancers. MTB meetings were held every 2 weeks with the purpose of interpreting and/or translating the molecular diagnostic results into diagnostic and/or treatment recommendations. All patient cases were first presented at organ-specific gynecology tumor boards by a team of experienced gyneco-oncologists, who reviewed the clinical course of each individual patient and discussed whether the patient was eligible for an MTB discussion. Apart from recent tumor material, recent radiology images and other diagnostic tests were also required for the interdisciplinary setting of the MTB. All treatment recommendations were supported by levels of evidence using the ESMO Scale for Clinical Actionability of molecular Targets (ESCAT).
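As a rough illustration of how the ESCAT grading mentioned above can be attached to a recommendation, consider the following sketch. The tier summaries are paraphrased from the published ESCAT scale, and the lookup function, field names, and example alteration are hypothetical, not the board's actual software:

```python
# Illustrative sketch: tagging an MTB treatment suggestion with an ESCAT
# evidence tier. Tier summaries are paraphrased; the function and example
# are hypothetical, not the MTB's actual workflow.

ESCAT_TIERS = {
    "I": "Alteration-drug match validated in clinical trials; ready for routine use",
    "II": "Investigational: antitumor activity shown, magnitude of benefit unknown",
    "III": "Clinical benefit previously demonstrated in another tumor type",
    "IV": "Preclinical evidence of actionability only",
    "V": "Objective responses without proven clinically meaningful benefit",
    "X": "Lack of evidence for actionability",
}

def annotate_recommendation(gene: str, alteration: str, drug: str, tier: str) -> dict:
    """Bundle one treatment suggestion with its ESCAT evidence tier."""
    if tier not in ESCAT_TIERS:
        raise ValueError(f"Unknown ESCAT tier: {tier}")
    return {
        "gene": gene,
        "alteration": alteration,
        "drug": drug,
        "escat_tier": tier,
        "tier_meaning": ESCAT_TIERS[tier],
    }

# Hypothetical example: a PIK3CA hotspot mutation matched to alpelisib
rec = annotate_recommendation("PIK3CA", "H1047R", "alpelisib", "I")
print(rec["escat_tier"], "-", rec["tier_meaning"])
```

Structuring each recommendation this way makes the evidence level explicit in the written MTB report, which is the intent behind using ESCAT in the first place.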
The process from enrolling the patient into the study to receiving a recommendation by the MTB is shown in Fig. .

Patients and patient informed consent
All patients discussed ( n = 95) were included in the prospective single-center case study, “The informative Patient”, launched in March 2017 at the LMU University Hospital, Munich, as the Munich-site part of the DKTK (German Cancer Consortium) program. All enrolled patients suffered from metastatic breast or gynecological cancer that had progressed after at least one line of prior standard treatment, and no longer had access to curative treatment. Prior to inclusion, all participants signed an informed consent form confirming that they had been informed about the potential and the limitations of molecular diagnostics for treatment selection and for the analysis of their data, about further discussion of their case by a multidisciplinary MTB, and about the collection of follow-up data on the course of disease for research purposes (including requesting patient data from other physicians and institutions). The intention-to-treat (ITT) population consisted of 100 patients. Eventually, five patients were excluded because of death prior to a treatment recommendation or withdrawal of consent. The data here are based on the results of an ITT population of 95 patients.

Molecular pathology
Molecular analyses were performed at the Institute of Pathology of the LMU. Appropriate tissue regions were selected histo-morphologically from formalin-fixed paraffin-embedded (FFPE) or fresh-frozen tissue. Moreover, liquid biopsies (blood, liquor) were included. In only four patients did the analysis have to be repeated due to material constraints. Targeted NGS was performed with the Oncomine Comprehensive Cancer v.3 Panels (Agilent), thereby screening for changes in 161 genes on the DNA (SNV, MNV, small ins, del, indels, CNV) and RNA (gene fusions) level. DNA and RNA were isolated using Qiagen's GeneRead DNA FFPE and RNeasy FFPE kits, respectively.
Nucleic acids (NA; DNA and RNA) from liquid biopsies were prepared using the QIAamp Circulating Nucleic Acid Kit. Subsequently, library preparation as the first step of NGS was performed employing Ampliseq Library Plus, Ampliseq cDNA synthesis, Ampliseq CD index, and Ampliseq Equalizer kits together with Ampliseq Comprehensive v3 kits (all Illumina), or the DNA and RNA Oncomine Comprehensive Panels v3 and Ion AmpliSeq Library, IonXpress Barcode Adapter, and Ion Library Equalizer kits together with Ion Chip kits (mostly 550) (all Thermo Fisher), following the respective user manual for each step. Libraries were run on an Ion Torrent GeneStudio S5 Prime (Thermo Fisher) or Illumina NextSeq 500 (Illumina) NGS machine. Analysis of the results was performed either with the Ion Reporter System (Thermo Fisher), followed by further variant and quality interpretation with a custom Excel tool, or by annotating VCF files using wANNOVAR ( http://wannovar.wglab.org/ ) together with the custom Python script PathoMine, filtering for clinically relevant mutations. Mutations were judged as relevant on the basis of the 'interpretation' key given in ClinVar . Alterations were confirmed with the Integrative Genomics Viewer (IGV, Broad Institute). The resulting molecular pathological dataset, together with data from immunohistochemistry, fluorescence in situ hybridization (FISH), and histo-morphology, became part of a comprehensive pathological report which was sent to the MTB.

Data assessment
For this analysis, electronic medical records were reviewed for patient characteristics and follow-up. If needed, medical oncologists, gynecologists, and general practitioners were contacted in order to collect follow-up data on treatment course and patient status. Patient characteristics were summarized using descriptive statistics.
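The ClinVar-based relevance filter described above can be approximated as in the following minimal sketch. This is not the actual PathoMine script; the record layout and the set of "relevant" interpretation terms are assumptions for illustration:

```python
# Minimal sketch of ClinVar-based variant filtering, in the spirit of the
# described PathoMine step. NOT the actual script; the field names
# ("gene", "clinvar_sig") and the term list are illustrative assumptions.

RELEVANT_TERMS = {"pathogenic", "likely pathogenic", "drug response"}

def is_clinically_relevant(variant: dict) -> bool:
    """Keep variants whose ClinVar interpretation suggests clinical relevance.
    Note: the naive substring match would also catch e.g. 'conflicting
    interpretations of pathogenicity'; a production filter needs exact terms."""
    sig = variant.get("clinvar_sig", "").lower()
    return any(term in sig for term in RELEVANT_TERMS)

annotated = [
    {"gene": "PIK3CA", "change": "p.H1047R", "clinvar_sig": "Pathogenic"},
    {"gene": "TP53",   "change": "p.P72R",   "clinvar_sig": "Benign"},
    {"gene": "BRCA1",  "change": "p.C61G",   "clinvar_sig": "Pathogenic/Likely pathogenic"},
]

relevant = [v for v in annotated if is_clinically_relevant(v)]
print([v["gene"] for v in relevant])  # PIK3CA and BRCA1 pass the filter
```

A filter like this reduces the full annotated variant list to the handful of alterations worth discussing at the board.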
Follow-up of clinical outcomes was performed to track tumor response to the recommended therapies, analyzed by measuring the progression-free survival (PFS) of patients who received the recommended treatment. PFS was calculated from the first day of treatment with the recommended in- or off-label targeted drug until the date of disease progression or death, whichever occurred first, analogous to the Johns Hopkins MTB study and to the Von Hoff et al. study . In order to evaluate the benefit of the treatment recommendation, we then calculated the PFS ratio (PFSr) by comparing the PFS of the recommended treatment with the PFS of the patient's previous therapy. The cut-off date for data analysis was August 1st, 2019.
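The PFS ratio described above reduces to a simple calculation; a sketch with illustrative values (the benefit threshold of 1.3 follows the Von Hoff criterion used in this study, the patient numbers are made up):

```python
# PFS ratio (PFSr = PFS2 / PFS1): PFS on the MTB-recommended therapy (PFS2)
# divided by PFS on the immediately preceding therapy (PFS1). PFSr > 1.3 is
# counted as clinical benefit (Von Hoff criterion). Values are illustrative.

BENEFIT_THRESHOLD = 1.3

def pfs_ratio(pfs_recommended_weeks: float, pfs_prior_weeks: float) -> float:
    """PFSr = PFS2 / PFS1; both durations in the same unit (here weeks)."""
    if pfs_prior_weeks <= 0:
        raise ValueError("Prior PFS must be positive")
    return pfs_recommended_weeks / pfs_prior_weeks

def has_clinical_benefit(pfs_recommended_weeks: float, pfs_prior_weeks: float) -> bool:
    return pfs_ratio(pfs_recommended_weeks, pfs_prior_weeks) > BENEFIT_THRESHOLD

# Illustrative patient: 24 weeks on the recommended drug vs 12 weeks prior
print(round(pfs_ratio(24, 12), 2), has_clinical_benefit(24, 12))  # 2.0 True
```

Because each patient serves as their own control, this metric sidesteps the need for a randomized comparator arm, which is why it is popular in single-arm precision-oncology registries.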
Patient characteristics
From March 2017 through March 2019, a total of 95 cases were submitted to the MTB. All patients ( n = 95) were female, had an underlying malignant condition, suffered from metastatic disease, and had experienced disease progression under standard treatment. Patients with implemented therapy recommendations had received a median of five (range 2–6) prior lines of therapy for metastatic cancer. The median age at the time of the initial MTB presentation was 52 years (range 19–82 years). As shown in Fig. , the most frequent tumor type was breast cancer ( n = 64, 68%), followed by ovarian cancer ( n = 19, 20%).
The majority of patients with breast cancer had triple-negative disease (ER-, PR-, and HER2-negative; n = 30; 46.9%), followed by estrogen receptor (ER)-positive and/or progesterone receptor (PR)-positive, human epidermal growth factor receptor 2 (HER2)-negative (luminal-like) disease (n = 28; 43.8%) or HER2-positive, ER-negative, PR-negative disease (n = 5; 7.8%) at the time of the MTB case discussion; one patient (1.6%) had triple-positive disease (ER-positive and/or PR-positive, HER2-positive). Characteristics of patients with a molecular profile are reported in Table .

Molecular profiling
Molecular tests using NGS were performed for all 95 patients. Out of the set of mutations from the molecular pathological NGS analysis, actionable mutations were defined as those matching or informing the use of available targeted agents. Four patients had tumor sequencing performed twice during the course of disease. 81 (85.3%) patients had tissue suitable for multimodal molecular profiling (NGS). All in all, 103 molecular alterations were identified in 55 cases (57.9%). The median number of alterations observed per sample was one (range 0–6). Out of the 55 patients, 41 (43.2%) had an actionable mutation, which the board reviewed as potentially targetable. No genomic alterations in the 161 investigated genes were found in 40 (42.1%) analyses; in 14 (14.7%) of these, the molecular diagnostic test was technically unsuccessful because of poor DNA quality or insufficient material. Although five (5.3%) patients had an actionable mutation, they did not receive a therapy recommendation because of co-morbidities, not meeting trial inclusion criteria, or other requirements for receiving a specific targeted therapy. We discovered mutations in over 30 different genes. Among the patients tested, the most common alterations were as follows: PIK3CA mutation (13/95; 13.7%), ERBB2 mutation (10/95; 10.5%), KRAS mutation (9/95; 9.5%), and CCND1 mutation (9/95; 9.5%).
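Per-gene frequencies like those reported above (e.g., PIK3CA in 13/95 patients) are simple tallies over per-patient variant lists. A sketch with fabricated records (counting each gene at most once per patient):

```python
# Tallying per-gene alteration frequencies across profiled patients, as in
# the reported counts (e.g., PIK3CA 13/95 = 13.7%). The patient records
# below are fabricated for illustration only.
from collections import Counter

patients = {
    "P01": ["PIK3CA", "CCND1"],
    "P02": ["ERBB2"],
    "P03": ["PIK3CA"],
    "P04": [],  # no alteration detected in the 161-gene panel
}

n_patients = len(patients)
# set() ensures a gene altered twice in one tumor is counted once per patient
gene_counts = Counter(g for genes in patients.values() for g in set(genes))

for gene, count in gene_counts.most_common():
    print(f"{gene}: {count}/{n_patients} ({100 * count / n_patients:.1f}%)")
```

Counting per patient rather than per variant matters: a tumor carrying two PIK3CA hotspots should still contribute a single patient to the PIK3CA denominator.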
Incidences of genomic alterations by gene and the distribution of molecular alterations by tumor type are shown in Fig. .

Recommendations
Among the 55 (57.9%) patients with at least one molecular alteration identified, 41 patients (43.2%) had an actionable alteration, whereas 14 (14.7%) had only non-actionable variants. This eventually resulted in 15 diagnostic and 49 treatment recommendations for 45 patients (47.4%). Multiple recommendations were issued for 20 (21.1%) patients (multiple recommendation principle). Six patients received a conditional recommendation, which required specific further diagnostics; two of these resulted in a treatment recommendation.

Diagnostic recommendations
Out of 15 diagnostic recommendations, 10 were pursued. In seven (7.4%) cases, extended genetic analyses were recommended, and eventually six (6.3%) of them were performed. Re-biopsies were recommended in 14 cases in which the initial diagnostic tests were technically unsuccessful; these were not included in the evaluation of the final results.

Therapeutic recommendations
As shown in Fig. , 36 (37.9%) patients were given a therapy recommendation, 14 (14.7%) of whom received more than one treatment suggestion, as their tumor molecular profile revealed more than one actionable mutation. Two (2.1%) patients were excluded from the evaluation of the clinical outcome, as they received the recommended therapy in the period between the NGS analysis and the MTB treatment recommendation. Overall, 9 of 34 therapeutic recommendations were pursued. Of note, in the present cohort, no patient pursued the recommended enrollment in a clinical trial. In-label therapy recommendations were implemented in five cases, whereas off-label recommendations were implemented in four patients.
The most common reasons for non-administration of the MTB-recommended therapy were deterioration of the patient's physical health, early death, lack of access to the recommended drug, reimbursement applications declined by the payer, or patient decision (see Table ).

Clinical outcome
All patients were included in the registry after multiple standard-of-care treatments. Of the nine (9.5%) patients following a therapy recommendation, 4 (4.2%) showed partial remission or disease stabilization lasting more than 16 weeks, including two who had received an off-label therapy recommendation. Comparing the PFS of the recommended therapy with the PFS of the previously received systemic treatment, we found that four of the nine responders receiving MTB-recommended therapies displayed a progression-free survival ratio (PFS2/PFS1; PFSr) > 1.3, showing the relevance of the suggested therapies. Two patients responded with an ongoing PFSr. Figure details the actual comparison of PFS on the implemented recommended treatment versus PFS on the patient's last prior treatment. More information about the outcome of responding patients is shown in Table . See Appendix for details of the identified actionable mutations and the corresponding treatment recommendations made by the MTB.
We evaluated the clinical consequences of actionable genetic alterations (detected by NGS) in 95 patients with metastatic breast cancer and gynecological malignancies, as part of a pilot monocentric patient registry aimed at generating real-world data. Forty-one patients (43.2%) had at least one actionable molecular aberration. The total number of patients with a drug-targetable alteration was 34 (35.7%). Overall, 9 of 34 patients (9.5% of all) received the recommended drug treatment. In a small but significant group of patients, four out of nine with implemented therapy recommendations (44.4%) experienced a clinical benefit (PFSr > 1.3) lasting over 16 weeks, a result similar to that shown by Jameson et al. in patients with metastatic breast cancer who received personalized therapy recommendations based on multi-omic molecular profiling .
Precision medicine not only offers personalized treatment concepts for patients, but also helps optimize diagnostic and treatment options by identifying biomarkers linked to response and resistance to therapy. For instance, in the past few years, the problem of resistance to endocrine therapy has been a focus of research. Recently, the key role of the acquisition of ligand-independent ESR1 mutations in breast cancer as a common mechanism of resistance to hormonal therapy was discovered . So far, the precision medicine movement has been controversial and has sparked multiple debates. On the one hand, the SHIVA trial (2015), one of the first randomized investigations of precision therapy, was negative for its primary endpoint (progression-free survival [PFS]), as no statistically significant difference in PFS between patients receiving molecularly targeted agents and the control arm was demonstrated . On the other hand, studies recruiting large numbers of patients, such as MOSCATO 01 (2017) and ProfiLER (2017), suggested that high-throughput genomic analyses (i.e., next-generation sequencing, comprehensive genomic profiling) improve clinical outcome in patients with advanced cancers. However, this approach has so far been proven beneficial only for a small subset of patients . As shown in Table , studies focusing on precision medicine show differing, contradictory results. While in some studies more than 20% of the enrolled patients received the treatment recommended on the basis of molecular profiling, in others the number of patients treated remains very low. These results suggest the need for large data collections in order to improve selection criteria and identify markers that discriminate the patients who might benefit most from precision medicine.
Although molecular targeted agents themselves are more precise than standard cytotoxic agents, clinical evidence for a significantly better outcome associated with MTAs is still missing, as access to targeted therapies remains limited, making it difficult to collect data on their efficacy. In order to achieve their implementation in clinical care, a re-assessment of the standards of evidence sufficient to prove the benefit of precision cancer therapies is needed . New evidence suggests that appropriately conducted real-world data studies have the potential to support regulatory decisions in the absence of RCT data . Based on initial results of the CCC LMU Munich, patients with various tumor entities benefit from extended molecular diagnostics and their implementation in clinical care . Recently, many studies have described the positive effect of MTB case discussions for particular groups of patients with advanced solid cancers. However, there is not enough evidence for the utility of MTB decisions for patients with breast and gynecological malignancies. The world of precision medicine is constantly evolving, and new targeted therapies are being developed and approved, enabling more and more patients (with mutations that were previously not actionable) to receive targeted therapies. For example, in spring 2019, the US Food and Drug Administration (FDA) approved the PIK3CA inhibitor alpelisib in combination with endocrine therapy for patients with HR-positive, HER2-negative, PIK3CA-mutated advanced or metastatic breast cancer. The availability of this drug after the start of the Managed Access Program in our clinic could have resulted in five further therapy recommendations in our MTB cohort, underlining the need to identify such alterations in cancer patients. The rising number of actively targetable mutations increases the complexity of the results, making their interpretation a challenge for many oncologists. In 2014, Gray et al.
conducted a study that evaluated cancer physicians’ ability to use multiplex tumor genomic testing and showed that many physicians lack confidence both in interpreting complex genomic test results and in incorporating them into practice . Thus, we see great potential in establishing the combination of molecular diagnostic tests and a subsequent case discussion by a multidisciplinary molecular board team not only as a routine for cancer patients but also as a training platform and a knowledge-expanding approach for oncologists to help guide their decisions. However, precision oncology faces several challenges that delay its widespread translation into clinical practice. Critics of the incorporation of NGS and similar methods into clinical practice express the following concerns: First, the significant cost of molecular diagnostics and targeted drugs is still a great disadvantage. While the price of next-generation sequencing has dropped from about $3 billion in the year 2000 to about $5000 today, the selection of molecular targeted agents is still enormously expensive . As the price of precision medicine is still rather high for most patients, it is now crucial to also evaluate its cost-effectiveness in order to support its translation into clinical practice, for example in the setting of clinical trials and research programs . Second, logistical problems causing limited access to targeted drugs and clinical trials for biomarker-positive patients represent another major problem. This is mainly due to the absence of reimbursement for drugs beyond their labelled indication. As a consequence, in order to receive the required, often off-label drug, patients need to be enrolled in active clinical trials, cover the costs themselves, or file an application for reimbursement with the competent health insurance prior to treatment initiation. 
Clinical trials often have strict inclusion criteria and are, therefore, not easily accessible to many patients. As shown in the SAFIR01 trial, only a small number of patients benefit from personalized therapies, mostly due to drug access problems. This problem could be solved by establishing a portfolio of early-phase clinical basket trials or by early-access programs . Recent studies suggest that the implementation of an MTB improves access to targeted therapy . As seen in our clinic, the early-access program that we started in November 2019 enabled many patients with a PIK3CA mutation to derive benefit from the targeted drug alpelisib soon after its FDA approval in spring 2019 . Third, another major limitation is the testing of tumors from patients with late-stage disease, which limits treatment options and hinders patients from receiving the recommended therapy or from enrolling in a clinical trial. As patients with advanced cancer are often in an unstable health condition, obtaining biopsy material with good tissue quality is quite difficult. In our study, 14 (14.7%) molecular diagnostic work-ups were technically unsuccessful. Moreover, the time between enrolling patients in the study, processing tumor samples, performing the molecular diagnostics, and holding the MTB case discussion is still rather lengthy, given that late-stage malignancies tend to evolve at unprecedented speed, causing deterioration of the general condition and hindering patients from receiving particular therapies; this is one of the main reasons for the relatively low number of implemented therapies (9 out of 34). In this study, molecular profiling and discussion were completed in a clinically reasonable time frame of approximately 4 weeks, which is comparable to the median turnaround times in other studies. 
Therefore, it is reasonable to expect that introducing molecular profiling at an earlier time point in a patient’s disease trajectory could improve the quality of molecular diagnostics and allow patients to benefit more from multidisciplinary, tailored MTB-based treatment advice. Fourth, another concern is that the current trend of identifying single variables and matching them with an appropriate targeted therapy may be irrelevant for some patients because of the heterogeneous landscape of their cancer. Disease variability among individual tumors causes patients with tumors of similar histology to respond differently to targeted therapies . For example, only 60% of lung cancer patients with the p.L858R mutation in the epidermal growth factor receptor gene (EGFR) respond to gefitinib, although all of them carry the exact same mutation in the target gene, indicating that other, yet unknown genetic aberrations may influence the effect of targeted drugs and that the disease course is still unpredictable to a great extent . Fifth, the common use of medicines outside the approved label is controversial. Off-label drug use may represent a risk to patient safety in some cases, but it is sometimes justified from a clinical point of view. Four out of nine (44%) of the implemented recommended therapies in the study “The informative Patient” included off-label drugs; two of these patients (50%) experienced a clinical benefit with a partial response or stabilization lasting over 4 months, after having progressed under the last standard treatment. There were several limitations to our study. First, despite a relatively high number of breast and gynecological cancers, the overall number of included patients remains low. Second, our patient cohort comprised heterogeneous tumor types, making general conclusions relatively difficult. 
Third, the number of patients with implemented therapies is limited, due to deterioration of patients’ general condition or lack of access to the recommended targeted drug, as previously reported in other studies. Nevertheless, we do demonstrate the feasibility of, and patient benefit from, a routine MTB at a large comprehensive cancer center. The landscape of molecular alterations in breast and gynecological cancers is heterogeneous. The quality and availability of molecular diagnostics are advancing, and the number of targeted therapies is increasing rapidly, offering patients with advanced cancer a variety of new treatment options. MTBs try to bridge the gap between molecular alterations and matching drugs in a structured manner. The primary objective of the present monocentric study was to estimate, in a real-world setting, the impact of interdisciplinary MTB case discussions for patients with breast and gynecological malignancies. Altogether, on the basis of individual molecular diagnostics, diagnostic and treatment recommendations were made for 45 patients (47.4% of all patients). Nine out of 34 patients received the recommended treatment. Four out of 9 patients responded with a PFS ratio (PFSr) > 1.3. Therefore, our results support the approach of matching specific drugs (in- and off-label) to particular genetic aberrations and demonstrate its relevance in breast and gynecological cancers for a small but clinically relevant group of patients. By providing multidisciplinary, tailored treatment advice based on genetic tests, it is now possible for more patients with breast and gynecological malignancies, whether with advanced-stage cancer or a rare tumor entity, to gain maximum clinical benefit and improved survival through personalized medicine. 
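The PFSr > 1.3 benefit criterion compares each patient's progression-free survival on the MTB-recommended therapy with the PFS achieved on the most recent prior line of therapy. A minimal sketch; the month values for the example patient are hypothetical:

```python
def pfs_ratio(pfs_matched_months, pfs_prior_months):
    """PFS ratio: progression-free survival on the MTB-recommended therapy
    divided by PFS on the patient's most recent prior line of therapy."""
    return pfs_matched_months / pfs_prior_months

def derives_benefit(ratio, threshold=1.3):
    """By the convention used in precision-oncology trials, a PFS ratio
    above 1.3 counts as clinical benefit from the matched therapy."""
    return ratio > threshold

# hypothetical patient: 6.5 months on the matched drug vs 4.0 months on the prior line
r = pfs_ratio(6.5, 4.0)
print(r, derives_benefit(r))   # -> 1.625 True
```

Because the patient serves as their own control, the criterion sidesteps the cohort heterogeneity discussed above, at the cost of assuming comparable PFS dynamics across lines.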
The MTB strategy, however, needs to be standardized and optimized in order to eliminate major logistical problems such as limited access to targeted agents (often off-label) and clinical trials, as well as patient referral at disease stages that are too late for a beneficial therapeutic intervention. |
Artificial intelligence and computational pathology | eb37b9c3-0aad-4626-8375-2006e589035b | 7811340 | Pathology[mh] | Artificial intelligence (AI) refers to the simulation of the human mind in computer systems that are programmed to think like humans and mimic their actions, such as learning and problem-solving. AI should be able to perform tasks that normally require human intelligence, such as visual perception, decision-making, and communication. AI-based computational pathology, as an emerging discipline, has recently shown great promise to increase both the accuracy and availability of high-quality health care to patients in many medical fields. The primary driving forces and limitations in this field are: (1) a shortage of experienced pathologists and the limitation of global health care resources ; (2) the ever-increasing amount of health data available, including digital images, omics, clinical records, and patient demographic information, generated through the process of patient care ; (3) the increased complexity created in managing and integrating data across different sources in order to maximize patient care; and (4) the need to efficiently harness machine learning-based algorithms in order to process and understand these big data . AI technologies have the ability to handle the gigantic quantity of data created throughout the patient care lifecycle to improve pathologic diagnosis, classification, prediction, and prognostication of diseases. The most important advantage of computational pathology is the reduction of errors in diagnosis and classification. The Camelyon Grand Challenge 2016 (CAMELYON16), a worldwide machine learning competition to evaluate new algorithms for the automated detection of cancer in hematoxylin and eosin (H&E)-stained whole-slide images (WSI), achieved encouraging results, with a 92.4% sensitivity in tumor detection. In contrast, a pathologist could only achieve 73.2% sensitivity . 
Computational pathology has the potential to transform the traditional core functions of pathology, not just growing sub-segments such as digital pathology, molecular pathology, and pathology informatics . Computational pathology aims to improve diagnostic accuracy, optimize patient care, and reduce costs by fostering global collaboration. As rapid technological advancement drives individualized precision medicine , computational pathology is a critical factor in achieving this goal. The development of brightfield and fluorescent slide scanners has made it possible to virtualize and digitize whole glass slides . Digital pathology includes the process of digitizing histopathology, immunohistochemistry, or cytology slides using whole-slide scanners, as well as the interpretation, management, and analysis of these digitized whole-slide images using computational approaches. The digital data of the slides can be stored in a central cloud-based space, allowing for remote access to the information for manual review by a pathologist or automated review by a data algorithm. This makes it possible to apply AI, a branch of computational science that generates data algorithms, in pathology . Based on the degree of intelligence, AI can currently be divided into two major categories: weak AI and strong AI (Table ). Weak AI, also known as artificial narrow intelligence, refers to the classification of data based on a well-established statistical model that has already been trained to perform specific tasks . In contrast, strong AI, also known as artificial general intelligence (AGI), can create a system that functions intelligently and independently by learning from any available normalized data. Generally, machine learning is an AI process that allows a computer system to automatically learn and improve from a data set by itself and to solve problems without being explicitly programmed during the process. 
Machine learning is an advanced branch of AGI that uses a large amount of initial data, the training set, to build statistical algorithms that interpret and act on new data later on . At present, various machine learning-based approaches have been developed and tested in pathology to assist pathologic diagnosis using basic morphologic patterns such as cancer cells, cell nuclei, cell divisions, ducts, blood vessels, etc. . Deep learning (also known as deep structured learning) is a subfield of machine learning based on artificial neural networks (ANNs), in which statistical models are established from input training data . Deep neural networks provide the architectures for deep learning. An ANN can make its own determination as to whether its interpretation or prediction is correct, resembling the complex biological neural network of the human brain . ANNs are composed of three functional layers of artificial neurons, known as “nodes”: an input layer, multiple hidden layers, and an output layer. The artificial neurons are connected to each other in the ANNs, and the strengths of their connections are known as “weights”. The connections between artificial neurons in the ANNs are assessed using statistical methods, including clustering algorithms, K-nearest neighbors, support vector machines (SVM), and logistic regression . The involved artificial neurons, which are related to the output event, and their associated connections, which bear different “weights”, need to be trained on a qualified big data set to achieve an optimized algorithm for specific tasks (Fig. ). Convolutional neural networks are a type of deep multilayer neural network particularly designed for visual imagery. They employ convolutional kernels, a set of learnable filters, to build up a pooling layer that can effectively reduce the dimensions of the image data while still retaining its characteristics (Fig. ). 
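As a concrete illustration, the convolution-plus-pooling step described above can be sketched in pure Python; the 8 × 8 "tile" and the hand-crafted vertical-edge kernel are toy stand-ins for the learned filters that operate on real image patches:

```python
def conv2d_valid(image, kernel):
    """Slide a small learnable filter over the image ('valid' convolution, stride 1)."""
    kh, kw = len(kernel), len(kernel[0])
    oh, ow = len(image) - kh + 1, len(image[0]) - kw + 1
    return [[sum(image[i + di][j + dj] * kernel[di][dj]
                 for di in range(kh) for dj in range(kw))
             for j in range(ow)] for i in range(oh)]

def max_pool(fmap, size=2):
    """Pooling layer: keep only the strongest response in each size x size window,
    shrinking the feature map while retaining its salient characteristics."""
    return [[max(fmap[i + di][j + dj] for di in range(size) for dj in range(size))
             for j in range(0, len(fmap[0]) - size + 1, size)]
            for i in range(0, len(fmap) - size + 1, size)]

# an 8 x 8 toy 'image tile' with a bright vertical stripe in columns 3-4
image = [[1.0 if 3 <= c <= 4 else 0.0 for c in range(8)] for r in range(8)]
vertical_edge = [[1.0, 0.0, -1.0]] * 3   # a hand-crafted vertical-edge kernel

features = conv2d_valid(image, vertical_edge)   # 8x8 -> 6x6 feature map
pooled = max_pool(features)                     # 6x6 -> 3x3 after pooling
print(len(features), len(features[0]), len(pooled), len(pooled[0]))  # -> 6 6 3 3
```

Note how each stage shrinks the spatial dimensions (8 × 8 to 6 × 6 to 3 × 3) while the strong edge responses survive pooling, which is exactly the dimension reduction with retained characteristics described in the text.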
By flattening an image and removing or reducing its dimensions, convolutional kernels act as a preprocessing step that then allows computer vision and machine vision models to process, analyze, and classify digital images, or parts of an image, into known categories. With slide scanning technology getting faster and more reliable, a larger volume of WSI data becomes available to train and validate convolutional neural network models. In combination with clinical information, biomarkers, and multi-omics data, computational pathology will become part of the new standard of care . Computational pathology not only facilitates a more efficient pathology workflow, but also provides a more comprehensive and personalized view, enabling pathologists to address the progress of complex diseases for better patient care . Case selection Patient selection is the initial step in training the algorithm (Fig. ). Both the training set and the validation set must include all sample types or variants related to the disease in question, including stages, grades, histologic classifications, complications, etc., to eliminate false-negative and false-positive scenarios. Still very much a machine-driven process, algorithms have no way to recognize variants that have not been included in the training set. The criteria for sample and subsequent slide selection for the learning set need to be established by experienced pathologists alongside a computational team. Confounding variables have to be isolated and removed, for example, patients with other medical conditions that may interfere with the outcome. In addition, inadequate slide preparation, including blurred images, over- or under-staining, air bubbles, and folded tissue, can produce inaccurate resulting algorithms. Comprehensive initial and follow-up clinical information, as well as laboratory results, should be collected and included. 
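One common way to honor the requirement that every variant appear in both the training and the validation set is a stratified split. The sketch below is illustrative only; the `grade` field and the 20% validation fraction are assumptions, not a prescribed protocol:

```python
import random
from collections import defaultdict

def stratified_split(cases, key, val_fraction=0.2, seed=42):
    """Split cases so that every stratum (e.g. grade, stage, histologic type)
    is represented in both the training and the validation set."""
    by_stratum = defaultdict(list)
    for case in cases:
        by_stratum[key(case)].append(case)
    rng = random.Random(seed)
    train, val = [], []
    for members in by_stratum.values():
        rng.shuffle(members)
        n_val = max(1, int(len(members) * val_fraction))
        val.extend(members[:n_val])
        train.extend(members[n_val:])
    return train, val

# toy cohort: 40 low-grade and 10 high-grade cases
cases = [{"id": i, "grade": g} for i, g in enumerate(["low"] * 40 + ["high"] * 10)]
train, val = stratified_split(cases, key=lambda c: c["grade"])
```

A plain random split could, by chance, leave a rare variant entirely out of one set; stratifying by the relevant label avoids exactly the unrecognized-variant failure mode described above.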
The more relevant the information included, the more accurate the resulting algorithm. Whole-slide imaging (WSI) Several slide scanning systems for whole-slide imaging have been approved by the US Food and Drug Administration (FDA) for use in clinical settings (Table ) . The first FDA-approved ultra-fast scanner, the Philips IntelliSite Pathology Solution (PIPS), has a resolution of 0.25 μm/pixel, a scanning speed of 60 s for a 15 × 15-mm scan area, and a scanning capacity of 300 slides in one load . The Aperio AT2 DX System from Leica Biosystems has a 400-slide capacity for brightfield and fluorescent slides . File sizes of digital images at applicable resolutions vary depending on the scan area on the glass slides. In general, pathology images are tremendously large, in the range of 1–3 GB per image. Therefore, a high-capacity, fast computer is required. Furthermore, the number of slides needed to achieve a clinically accepted algorithm may vary by tissue type and diagnosis. Campanella et al. showed that at least 10,000 slides are necessary for training to reach good performance. The authors also observed discrepancies in prediction between the Leica Aperio and the PIPS and found that brightness, contrast, and sharpness affect prediction performance . Image analysis and automation For digital slide analysis, Senaras et al. described a novel deep-learning framework, called DeepFocus, which enables the automatic identification of blurry regions in digital slides for immediate re-scan in order to improve image quality for pathologists and image analysis algorithms. Janowczyk et al. presented an open-source tool called HistoQC to assess color histograms, brightness, and contrast of each slide and to identify cohort-level outliers (e.g., darker or lighter staining than other slides in the cohort). These methods play an essential role in the quality control of whole-slide images, helping to standardize image quality in computational pathology. 
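A HistoQC-like cohort screen can be approximated with a simple z-score over per-slide summary statistics. This is an illustrative stand-in, not HistoQC's actual algorithm, and the slide names and brightness values are invented:

```python
import statistics

def flag_outlier_slides(brightness_by_slide, z_threshold=2.5):
    """Flag slides whose mean brightness deviates strongly from the cohort,
    e.g. much darker or lighter staining than the other slides."""
    values = list(brightness_by_slide.values())
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return sorted(slide for slide, b in brightness_by_slide.items()
                  if abs(b - mean) / sd > z_threshold)

# invented cohort: 40 well-stained slides plus one badly under-exposed scan
cohort = {f"slide_{i:03d}": 0.62 + 0.01 * (i % 5) for i in range(40)}
cohort["slide_041"] = 0.20
print(flag_outlier_slides(cohort))  # -> ['slide_041']
```

In practice the same screen would run over several metrics at once (contrast, sharpness, color histograms), with flagged slides routed back for re-scanning before they can skew model training.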
Due to improvements in various smart image-recognition algorithmic discriminators, based on high-capacity deep neural network models , the pathologist can be released from extensive manual annotations for each whole-slide image at the pixel level so that they can focus on other parts of the clinical workflow. Patches extracted from whole-slide images (224 × 224 to 256 × 256 pixels) have been widely used in many machine learning domains to train classifiers for diagnostic or prognostic tasks. For example, Campanella et al. employed multiple instance learning (MIL) approaches with “bag” and “instance” based on convolutional neural networks and recurrent neural networks to classify prostate cancer images of H&E slides. Kapil et al. applied a deep semi-supervised architecture and auxiliary classifier generative adversarial networks, including one generator network and one discriminator network, to automatically analyze PD-L1 expression in immunohistochemistry slides of late-stage non-small cell lung cancer needle biopsies. Barker et al. revealed an elastic net linear regression model and weighted voting system to differentiate glioblastoma multiforme and lower-grade glioma with an accuracy of 93.1%. Pathologist-centered medical system Although most AI research is still focused on the detection and grading of tumors in digital pathology and radiology, computational pathology is not limited to the detection of a morphological pattern. It can also contribute to the complex process of analysis and judgment using demographic information, digital pathology, -omics, and laboratory results . 
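The multiple instance learning (MIL) idea mentioned above treats a slide as a "bag" of patch "instances", with the slide labeled positive if any instance is. In its simplest form this reduces to max-pooling over patch predictions; the patch probabilities below are invented:

```python
def bag_probability(patch_probs):
    """MIL aggregation: score a slide ('bag') by its most suspicious
    patch ('instance'), i.e. max-pooling over patch predictions."""
    return max(patch_probs)

def classify_slide(patch_probs, threshold=0.5):
    """Call the whole slide tumor-positive if any patch crosses the threshold."""
    return "tumor" if bag_probability(patch_probs) >= threshold else "benign"

benign_slide = [0.02, 0.10, 0.07, 0.04]   # all patches look normal
tumor_slide = [0.03, 0.05, 0.91, 0.08]    # one strongly suspicious patch
print(classify_slide(benign_slide), classify_slide(tumor_slide))  # -> benign tumor
```

The appeal of this setup is that only slide-level labels are needed for training, which is what frees the pathologist from pixel-level annotation.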
Therefore, AI has the potential to contribute to nearly all aspects of the clinical workflow, from more accurate diagnosis to prognosis and individualized treatment. Multiple sources of clinical data are incorporated into mathematical models to generate diagnostic inferences and predictions, enabling physicians, patients, and laboratory personnel to make the best possible medical decisions . For example, deep neural networks have been applied to automated biomarker assessment in breast tumor images, such as HER2, ER, and Ki67 . Hamidinekoo et al. created a novel convolutional neural network-based mammography–histology–phenotype–linking model to connect and map the features and phenotypes between mammographic abnormalities and their histopathological representation. Mobadersany et al. developed a genomic survival convolutional neural network model to integrate information from both histology images and genomic data to predict time-to-event outcomes and demonstrated that its prediction accuracy surpassed the current clinical paradigm for predicting the overall survival of patients diagnosed with glioma. As electronic health record (EHR) systems enable us to collect medical data such as age, race, gender, social history, and clinical history, applying these data as independent factors of a particular disease in an appropriate mathematical algorithm becomes feasible . These integrated data allow pathologists to gain deeper insights and to switch between different treatment algorithms at different stages of the disease and/or for different statuses of the patient. As health-related apps on mobile devices and smart personal trackers become popular, continuous real-time health information, such as temperature, heart rate, respiratory rate, electrocardiogram, body mass index, blood glucose, and blood oxygen content, can be directly accessed and recorded into individual health data. 
These data can then be incorporated into the EHR and laboratory information systems (LIS) to reintegrate into a virtualized and digitized person, which was not possible previously and is beyond what the human brain alone can accomplish (Fig. ). This new system of data-driven care requires pathology, as a cornerstone of modern medicine, to integrate data, algorithms, and analytics to deliver high-quality and efficient care. The combination of computational pathology and big data mining offers the potential to create a revolutionary way of practicing evidence-based, personalized medicine. Global pathology service model Three essential advancements have occurred in recent years: the ability to store great amounts of data, from network-attached storage to cloud storage; growing network speeds, from Wi-Fi 6 to 5G; and high-performance central processing units (CPUs) and graphics processing units (GPUs). These technological improvements not only enhance people’s daily lives, but also have a great impact on medicine, especially digital and computational pathology (Fig. ). Together with the surging development of network and information technology, these improvements allow for the centralization of medical and computing resources, with the benefit of larger sample data volumes for the optimization of algorithms. Furthermore, a central cloud-based AI laboratory and data bank of digital and computational pathology make a global network of computational pathology possible. In local laboratories or centralized scanning centers, histology slides can be converted to whole-slide images and numerical data. These data can then be transferred to the central laboratory together with EHR data and multi-omics data for further analysis (Fig. ). Patients in different geographic areas around the world can benefit from more efficient and effective diagnosis, treatment, and follow-up. 
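A back-of-envelope calculation illustrates the bandwidth side of such a centralized model. The per-image size (1–3 GB) and the 300-slide scanner load come from the figures quoted earlier; the 1 Gb/s link speed and 70% protocol efficiency are assumptions for illustration:

```python
def transfer_hours(n_slides, gb_per_slide, link_mbps, efficiency=0.7):
    """Rough upload time for sending whole-slide images from a scanning
    site to a central cloud laboratory over a link of given nominal speed."""
    total_gbits = n_slides * gb_per_slide * 8          # GB -> gigabits
    effective_mbps = link_mbps * efficiency            # account for protocol overhead
    seconds = total_gbits * 1000 / effective_mbps      # gigabits -> megabits
    return seconds / 3600

# one full 300-slide scanner load at ~2 GB each over an assumed 1 Gb/s link
print(round(transfer_hours(300, 2.0, 1000), 1))  # -> 1.9 (hours)
```

Even under these optimistic assumptions, a single day's scanning output occupies the link for hours, which is why the text's point about growing network speeds is a precondition for the global service model rather than a detail.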
In the meantime, pathologists are able to access the information they need to care for patients or to collaborate with specialists anytime and anywhere. Deep-learning platforms have the potential to facilitate the discovery of more complicated or subtle connections and to help pathologists make the best clinical decisions to meet every patient’s needs. Increasingly, AI detection is being applied to different subspecialties with various sample types . Early reports on accuracy have been promising, suggesting that AI-assisted systems have the potential to classify accurately at an unprecedented scale and lay the foundation for the deployment of computational pathology in nearly all subspecialties . Prostate cancer Campanella et al. validated a high-capacity deep neural network-based algorithm to analyze image classification and categorization of 44,732 whole-slide images across three different cancer types, including prostate cancer, basal cell carcinoma, and breast cancer metastases to axillary lymph nodes. In terms of whole-slide images, they found that ×5 magnification has higher accuracy. 
They trained a statistical model with a MIL-based tile classifier for each tissue type and achieved an area under the receiver operating characteristic curve (AUC) above 0.98 for all cancer types. Its clinical application would allow pathologists to exclude 65–75% of slides while retaining 100% sensitivity . Wildeboer et al. discussed deep-learning techniques based on different imaging sources, including magnetic resonance imaging, echogenicity in ultrasound imaging, and radiodensity in computed tomography, as computer-aided diagnostic tools for prostate cancer. They found that convolutional neural network architectures performed as well as or better than SVM or random forest classifiers in machine learning .

Colorectal cancer

Korbar et al. developed multiple deep-learning algorithms, modified versions of a residual network architecture, that can accurately classify whole-slide images of five types of colorectal polyps: hyperplastic, sessile serrated, traditional serrated, tubular, and tubulovillous/villous. Of 2074 images, 90% were used for model training and the remaining 10% were assigned to the validation set. The overall accuracy for classification of colorectal polyps was 93% (95% confidence interval (CI), 89.0–95.9%) . Bychkov et al. combined convolutional neural network and recurrent neural network architectures to predict colorectal cancer outcomes based on tissue microarray (TMA) samples from 420 colorectal cancer patients. Their results show that the AUC of the deep neural network-based outcome prediction was 0.69 (hazard ratio (HR), 2.3; 95% CI, 1.79–3.03). For comparison, pathology experts performed inferiorly on both TMA samples (HR, 1.67; 95% CI, 1.28–2.19; AUC, 0.58) and at the whole-slide level (HR, 1.65; 95% CI, 1.30–2.15; AUC, 0.57), implying that deep neural networks can extract more prognostic information from the tissue morphology of colorectal cancer than an experienced pathologist .
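The slide-level results quoted above rest on two generic steps that can be sketched in a few lines (an illustration, not the authors' code): multiple-instance aggregation of tile probabilities into a slide score, and a rank-based AUC over the slide scores. All numbers below are invented:

```python
# Illustrative sketch of MIL-style slide scoring and AUC computation.
# Tile probabilities and labels are made up for demonstration.

def slide_score(tile_probs):
    """Max-pooling MIL: one suspicious tile is enough to flag the slide."""
    return max(tile_probs)

def auc(scores, labels):
    """Probability that a random positive slide outranks a random negative."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

slides = [[0.1, 0.9, 0.2], [0.1, 0.2], [0.3, 0.8], [0.05, 0.1]]
labels = [1, 0, 1, 0]
scores = [slide_score(t) for t in slides]  # [0.9, 0.2, 0.8, 0.1]
print(auc(scores, labels))  # 1.0: every positive outranks every negative
```

With a threshold chosen so that no positive slide falls below it, the confidently low-scoring slides are the ones that could be excluded from manual review while retaining full sensitivity.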
Breast cancer

Wang et al., the winning team of the CAMELYON16 challenge, used 256 × 256 pixel input patches from positive and negative regions of whole-slide images of breast sentinel lymph nodes to train various classification models, including GoogLeNet Patch, AlexNet, VGG16, and FaceNet. The patch classification accuracies were 98.4%, 92.1%, 97.9%, and 96.8%, respectively. Among the algorithms, GoogLeNet had the best performance, being generally faster and more stable, and achieved an AUC of 0.925 for whole-slide image classification. With the assistance of the deep-learning system, the accuracy of pathologists’ diagnoses improved significantly, with the AUC increasing from 0.966 to 0.995, representing an ~85% reduction in the human error rate . Furthermore, the open data set of annotated whole-slide images from the CAMELYON16 and CAMELYON17 challenges enables testing of new machine learning and image analysis strategies for digital pathology .

Cytology

Martin et al. applied convolutional neural networks to classify cervical cytology images into five diagnostic categories: negative for intraepithelial lesion or malignancy, atypical squamous cells of undetermined significance, low-grade squamous intraepithelial lesion, atypical squamous cells cannot exclude high-grade squamous intraepithelial lesion, and high-grade squamous intraepithelial lesion, achieving accuracies of 56%, 36%, 72%, 17%, and 86%, respectively, which implies that convolutional neural networks are able to learn cytological features . In another cytopathology study, the authors used a morphometric algorithm and a semantic segmentation network based on VGG-19 to classify urine cytology whole-slide images according to the Paris System for Urine Cytopathology, achieving a sensitivity of 77%, a false-positive rate of 30%, and an AUC of 0.8 .

COVID-19

During the outbreak of COVID-19, telemedicine and computer-aided medicine rapidly entered the market in many countries.
Highly contagious nature, systemic risks, and social isolation brought unexpected challenges to traditional medicine. Applying AI-based computer-aided medicine along with clinical data from EHR, including individuals’ clinical risk factors of human-to-human interactions and a variety of diverse social data, may provide a quick control of this public health emergency with a better quality and safety . Several AI companies have been working on products to address the COVID-19 pandemic. For example, JLK Inspection, Korea ( http://www.jlk-inspection.com/#/medical/main ) is integrating the reverse transcription-polymerase chain reaction (RT-PCR) results, imaging tests, and their universal AI platform, AIHuB, to provide COVID-19 diagnosis. Persivia, Massachusetts ( https://persivia.com/covid-19-detection/ ) announced a new surveillance module based on their Soliton AI engine to identify and alert patients who are presumed positive for COVID-19. Biofourmis, Massachusetts ( https://www.biofourmis.com/ ) developed an analytic platform called Biovitals Sentinel, which provides 24/7 remote monitor to identify early clinical deterioration and enable earlier interventions. Schaar et al. described that machine learning could significantly enhance both the efficiency and effectiveness of randomized clinical trials for COVID-19. It has the capability to speed up recruiting subjects from identifiable subgroups and assigning subjects to treatment or control groups as well as significantly reducing error and requiring many fewer patients .

Although machine learning has produced promising results and provided many benefits in computational pathology, the following limitations need to be addressed before deep machine learning can be implemented in the clinical setting.
Standardization and normalization

The successful adoption of whole-slide images in digital pathology depends heavily on each step of high-quality slide preparation, including embedding, cutting, staining, and scanning. Folded tissue sections during cutting, staining variation, air bubbles introduced during coverslipping, and differing scanner settings for brightness, intensity disparity, average color, and boundary intensity can all produce unreliable raw data and inaccurate results . Protocols and systematic quality controls need to be standardized to reduce the systematic and random errors resulting from different instruments, since a single noisy input in big data can cause misclassification and change the slide prediction, possibly resulting in a large number of false positives or negatives. The data used to generate algorithms are analyzed with different models by different developers. The larger the accumulated data, the more accurate the algorithms, especially for rare diseases and specific small populations. A standard data format and normalization method should be adopted to merge data sets from different sources and train them into one algorithm, since differing data sources may cause variation in classification accuracy in practice. Digital Imaging and Communications in Medicine (DICOM) developed standards for medical images including radiology ( www.dicomstandard.org ), defining the formats for medical images that can be exchanged with the data and quality necessary for clinical use. The DICOM Standard now provides support for WSI by incorporating a way to handle large tiled images as multi-frame images and multiple images at varying resolutions ( http://dicom.nema.org/Dicom/DICOMWSI/ ).

The role in computational pathology

Computational pathology is not only important in medical research but is also needed to address clinical questions in practice .
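As a toy illustration of the normalization idea above, per-channel mean/standard-deviation matching can be sketched in a few lines. This is a deliberate simplification (real stain normalization typically works in optical-density or stain-vector space), and the pixel values are invented:

```python
# Minimal sketch (not a production method): shift one color channel of a
# scanned tile to a reference mean/std so that scanner and staining
# variation is reduced before training. Values are toy numbers.
from statistics import mean, pstdev

def normalize_channel(values, ref_mean, ref_std):
    m, s = mean(values), pstdev(values)
    if s == 0:  # flat channel: map everything to the reference mean
        return [ref_mean] * len(values)
    return [(v - m) / s * ref_std + ref_mean for v in values]

red = [100, 120, 140, 160]  # one channel of a scanned tile
norm = normalize_channel(red, ref_mean=130, ref_std=10)
# norm now has mean 130 and (population) std 10
```

Applying the same reference statistics to every incoming slide is one simple way to make tiles from different scanners comparable before they enter a shared training set.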
To achieve this goal, a team of experts from different fields is needed for computational pathology projects, including data scientists such as statisticians and bioinformaticians for algorithm design, as well as architects and engineers to build the physical environment and maintain the hardware (Fig. ). Among them, pathologists play a critical role in bringing medical questions and clinical applications to the developer team and in triggering downstream industry development . The new role of the pathologist in computational pathology requires not only solid clinical knowledge and experience but also knowledge of statistical analysis and data mining to bridge the gap between clinical medicine and AI, so that when a disease suddenly emerges, or a new biomarker is discovered, the pathologist can quickly react by creating a new algorithm or optimizing an existing one to assist the clinician . Furthermore, pathologists who understand the potential problems that may occur during data analysis can frame clinical problems clearly with computational thinking. Good communication among team members helps in designing more efficient algorithms, because algorithms with different coding approaches consume different amounts of computing resources and time, especially for big data, even when they generate the same end result. Furthermore, AI pathology provides excellent tools for experimental pathology by integrating morphology at the organ, histology, cell, and organelle levels with the molecular details of omics data.

Hardware limitations

The accuracy of applications in computational pathology depends heavily on large amounts of data, reliable hardware and software, and a supportive network environment. Large image files (around 3 GB per slide scan) require significant storage space with backup capability both locally and in the cloud.
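The ~3 GB-per-slide figure translates into concrete storage and bandwidth budgets. A back-of-envelope sketch (the daily slide volume and link speed below are hypothetical; only the 3 GB figure comes from the text):

```python
# Back-of-envelope storage/bandwidth estimate for a digital pathology lab.
# slides_per_day and link_gbit_s are hypothetical assumptions.
GB_PER_SLIDE = 3
slides_per_day = 200                              # hypothetical lab volume
daily_tb = GB_PER_SLIDE * slides_per_day / 1000   # TB generated per day
yearly_pb = daily_tb * 365 / 1000                 # PB generated per year

link_gbit_s = 1                                   # hypothetical uplink speed
upload_s = GB_PER_SLIDE * 8 / link_gbit_s         # seconds to upload a slide
print(f"{daily_tb} TB/day, {yearly_pb:.2f} PB/yr, {upload_s} s per slide")
```

Even at this modest volume, a lab accumulates on the order of a fifth of a petabyte per year, which is why both local and cloud capacity, and the network in between, become limiting factors.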
In addition, deep machine learning solutions, especially for the analysis of pathology images, depend heavily on the graphics processing unit (GPU), a chip on the computer’s graphics card for rapidly manipulating graphics and processing images . A powerful GPU can provide significant performance enhancement alongside the CPU to boost computing capacity and reduce turnaround time. For either data transmission or cloud-based image processing, the data bandwidth of both intranet and internet becomes a bottleneck that limits upload and download speeds . Only when all these related elements have developed into a robust network system (Fig. ) can computational pathology move forward to help resolve more complex and multifaceted medical and clinical questions and research tasks. Finally, clearance as a medical device (510(k)) by the FDA is critical to ensure clinical reliability and acceptance by the pathology community.

Ethics

In the new era of computation-driven decision-making based on AI and machine learning, computational pathology must handle ever more complicated interactions among massive amounts of information, ranging from clinical history and omics data to living environment and social habits . It is very likely that the experts involved in these decision-making processes will no longer be exclusively pathologists. Instead, the decision-making panel will include other experts such as data statisticians and bioinformaticians, which may raise ethical concerns .
A continuous massive, sensitive health data transfer among clinics, laboratories, and data banks can enable higher precision medicine, but at the same time increases the security vulnerability. Policies around the strict protection of patient privacy and personal data create an obstacle for computational pathology in accessing the health databases needed to create more comprehensive training data sets. General Data Protection Regulation was enacted in May 2018 in Europe to impose new responsibilities on organizations who process the data of European Union citizens for scientific research . This concept highlights the proportionate approach to regulate computational pathology-related security and ethical issues while not limiting innovation unduly, which is difficult but critical.

Technological innovation in health care is growing at an increasingly fast pace and has been integrated into both our daily lives, such as smart healthy tracker, and diagnostic algorithm in medical practice . With the rapid development of digital pathology, molecular pathology, and informatics pathology, computational pathology is increasingly involved in many subspecialties such as pulmonary, renal, gastrointestinal, neurology, and gynecology pathology. We believe the initial phase of AI will start with specific tasks such as the diagnosis of a particular cancer and classification of tissue types, which require limited and simple criteria . For example, the common subtypes and variants of benign and malignant neoplasm in prostate should be included in the training and validation to ensure the feasibility of daily pathology practice. As a result of more data collection and more powerful computing capacity over time, the clinical applications of AI will be broader and the number of nonspecific cases in the gray zone or with red flags classified by AI for manual review will be decreased.
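The gray-zone workflow described here is, at its core, score-based routing: confidently negative and confidently positive cases are auto-reported, while everything in between goes to a pathologist. A minimal sketch, with hypothetical thresholds:

```python
# Sketch of score-based triage: auto-report the confident extremes and
# route gray-zone scores to manual review. Thresholds are hypothetical.
LOW, HIGH = 0.05, 0.90

def triage(score):
    if score < LOW:
        return "auto-negative"
    if score > HIGH:
        return "auto-positive"
    return "manual review"

cases = [0.01, 0.5, 0.95, 0.07]
print([triage(s) for s in cases])
# ['auto-negative', 'manual review', 'auto-positive', 'manual review']
```

As models improve, the LOW/HIGH band can be narrowed, shrinking the fraction of cases that need manual review, which is the trend the text anticipates.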
Growing medical data, including genomics, proteomics, informatics, and whole-slide images , are expected to be integrated into data-rich pathomics and to drive the rapid development of AI-assisted computational pathology. Although many challenges remain, computational pathology, with the deployment of digital pathology technology and statistical algorithms, will continue to improve clinical workflows and collaboration among pathologists and other members of the patient care team. The improved infrastructure of the network environment, enhanced computing capacity, and broad integration of informatics have ushered in new horizons for both computational pathology and collaborative practice, making it possible for data to travel and for cloud-based central laboratories and data banks to deliver better care for patients at lower cost. In the new era of deep learning-assisted pathology, data banking, integration, and cloud laboratories are becoming an essential part of the daily practice of pathology. Furthermore, pathologists, data scientists, and industry are starting to incorporate genomics, proteomics, bioinformatics, and computer algorithms into large amounts of complex clinical information. Through this process, computational pathology can ultimately contribute valuable insights to the diagnosis, prognosis, and treatment of disease. Although many technical and ethical challenges need to be addressed, computational pathology as a synergistic system will boost workflows, enabling clinical teams to share and analyze image data on a broader platform. Currently, deep learning is being applied to more and more specialized tasks in medicine. Several studies discussed above showed that algorithm assistance has the potential not only to improve the sensitivity and accuracy of diagnoses but also to improve turnaround time. Moreover, according to Sarwar et al.’s study, around 75% of pathologists across 59 countries are interested in and excited about using AI as a diagnostic tool. Finally, despite the challenges and obstacles, the potential of computational pathology to change and improve the current health care system is promising and exciting.
Nasal Reshaping Using Barbed Threads Combined With Hyaluronic Acid Filler and Botulinum Toxin A | 61bb4884-7565-4d6d-859e-75cdc9757d4a | 11816005 | Surgical Procedures, Operative[mh] | Introduction The nose, as a central feature of the face, plays a pivotal role in defining facial aesthetics. Key anatomical elements such as the nasal dorsum, tip, alar base, and internal nasal valve collectively contribute to its overall appearance . Consequently, rhinoplasty is among the most performed procedures worldwide and remains a significant topic for scientific discussions, hands‐on courses, and innovative surgical laboratories . Despite significant advancements in nasal surgical techniques, the extended surgical downtime remains a major limitation for patients. As a result, nonsurgical rhinoplasty has gained considerable appeal among both aesthetic physicians and patients over the past two decades. Nonsurgical rhinoplasty is being evaluated based on procedure's safety, significance of aesthetic improvement, as well as its convenience as an in‐office treatment . The development of new hyaluronic acid dermal fillers with favorable efficacy and safety profiles has significantly increased the popularity of nonsurgical rhinoplasty (NSR) among patients seeking aesthetic enhancement of their nose . Fillers are employed to straighten the nasal dorsum, correct asymmetries, and achieve tip elevation. Advances in techniques have further allowed fillers to support the nasal tip, elevate the lateral alar regions, and widen the internal nasal valve . Another product for noninvasive rhinoplasty is botulinum toxin type A, which is used to reduce the appearance of nasal “bunny lines” caused by muscle activity, decrease nasal width during smiling by reducing muscle activity, and improve the appearance of a droopy nasal tip when injected into the depressor septi nasi muscle . 
Among the nonsurgical methods, barbed lifting threads have emerged as valid and effective aesthetic tools in clinical practice . Their use across various facial areas has advanced significantly, driven by improvements in both manufacturing techniques and clinical practices . Manufacturers have developed long‐lasting absorbable threads that mitigate the complications associated with permanent threads while providing extended clinical benefits . Their use has also expanded to nasal reshaping, offering a broader range of indications over time. Barbed threads not only provide lifting effects but also exhibit volumizing properties, enhancing their utility in nonsurgical rhinoplasty. They are utilized for nasal tip lifting, nasal dorsal narrowing, alar narrowing, and the correction of dorsal irregularities . The aim of this study was to evaluate the clinical efficacy and safety of various nonsurgical rhinoplasty techniques, including the use of hyaluronic acid (HA) filler, botulinum toxin type A injections, and barbed lifting threads, in patients seeking nasal reshaping. The study compared the outcomes of these treatments, assessing their impact on nasal aesthetics and patient satisfaction, as well as adverse effects. Specifically, the study aimed to determine the most effective nonsurgical approach for achieving the desired aesthetic results with minimal complications and downtime.

Materials and Methods

2.1 Patient Selection

A total of 85 patients were selected for this study between March 2022 and July 2022. Patients were considered eligible for the study if they had at least one of the following indications: prominent nasal hump, long nose with lack of tip support, hyperactive depressor septi, and lack of nasal tip projection. Subjects were excluded if they had a previously operated nose, a nose previously treated with fillers or threads, or any autoimmune disease.
Patients were offered treatment options of nasal thread lifting, nasal fillers, and botulinum toxin injections.

2.2 Treatment Groups

Patients were offered two treatment options:
1. Nasal threads followed by botulinum toxin type A (BTX) injections, with or without HA filler 2 weeks later.
2. HA filler with BTX injections.

Patients were divided into three groups:
- Group 1 (“Filler + BTX”): HA filler and BTX injections ( n = 63).
- Group 2 (“Threads + BTX”): threads followed by BTX injections 2 weeks later ( n = 9).
- Group 3 (“Threads + Filler + BTX”): threads followed by BTX injections and HA filler 2 weeks later ( n = 13).

2.3 Procedure Details

2.3.1 Threads Treatment

The Sole Rhinoplasty kit by Aptos was used. It comprised five P (LA/CL) multidirectional barbed threads (USP 2/0, EP 3), each 120 mm in length. It also included five round-tip hollow needles (20 G × 120 mm, straight), one round-tip hollow needle (23 G × 80 mm, straight), one lancet-point needle (18 G × 40 mm, straight), one lancet-point needle (30 G × 13 mm, straight), and a removable needle attachment. Threads were inserted from the nasal tip through a puncture site made by an 18 G needle 2 mm below the nasal tip. Each thread was inserted to cover the length twice between the nasion superiorly and the nasal crest inferiorly (Figure ).

2.3.2 HA Filler

Restylane Lyft hyaluronic acid filler was utilized in all cases. In Group 1 (“Filler + BTX”), subjects received HA filler on the first treatment day; in Group 3 (“Threads + Filler + BTX”), 2 weeks after thread administration. In Group 1 (“Filler + BTX”), HA filler was applied to straighten the nasal dorsum and lift the tip through an intercolumellar injection and a tip bolus. The injection was performed using a 27 or 29 G needle, inserted perpendicularly over the nasion.
Subsequently, the injection was carried out using a 25 G cannula introduced 2 mm below the nasal tip in the subcutaneous plane dorsally and between the medial edges of the lower lateral cartilages inferiorly. The volume of filler injected ranged from 0.5 to 1 mL. In Group 3 ("Threads + Filler + BTX"), HA filler was administered 2 weeks after thread insertion. The dorsum was straightened using a 27 or 29 G needle, perpendicularly injected over the nasion. Additionally, the nasal tip was treated using a 25 G cannula to create a tip elevation by injecting up to 0.05 mL subdermally. 2.3.3 Botulinum Toxin A Injection In all three groups, 4 IU of BTX (Allergan) were injected at the lower edge of the columella to reduce the activity of the depressor septi: in Group 1 ("Filler + BTX") on the first treatment day, and in Groups 2 and 3 two weeks after thread administration. 2.4 Evaluation and Statistical Analysis Patient satisfaction was evaluated using the FACE‐Q questionnaire for the nose before treatment and 1 month and 1 year after treatment (short‐term and long‐term assessment). The nose FACE‐Q module is part of the FACE‐Q scales; it consists of 10 validated questions concerning patient satisfaction with the nose and four questions about nasal adverse effects. For each question, a 4‐point scale is used, with 1 being the minimum and 4 the maximum score. Individual scores were calculated as a total out of 40 for the patient satisfaction module or out of 16 for adverse effects. Nasal adverse effects were self‐assessed by study subjects using FACE‐Q at 48 h and 1 week after treatment for safety assessment. Patients receiving threads treatment assessed the adverse effects 48 h and 1 week after the thread treatment but not after the additional treatment (BTX and, if needed, HA filler) 2 weeks later. Statistical analysis was conducted using SPSS software.
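The FACE-Q scoring arithmetic described above can be sketched as a small, hypothetical helper (the function name and example item scores are illustrative, not study data): each item is scored 1-4, so the 10-item satisfaction module totals between 10 and 40, and the 4-item adverse-effect module totals between 4 and 16.

```python
def faceq_total(item_scores, n_items):
    """Sum FACE-Q items; each item is scored on a 1-4 scale."""
    if len(item_scores) != n_items:
        raise ValueError(f"expected {n_items} items, got {len(item_scores)}")
    if any(not 1 <= s <= 4 for s in item_scores):
        raise ValueError("each item must be scored 1-4")
    return sum(item_scores)

# Hypothetical respondent: 10 satisfaction items (total out of 40)
satisfaction = faceq_total([3, 3, 4, 2, 3, 4, 3, 3, 2, 4], n_items=10)  # 31/40
# 4 adverse-effect items (total out of 16); all-1 responses give the floor of 4
adverse_floor = faceq_total([1, 1, 1, 1], n_items=4)  # 4/16
```

The 4-point floor of the adverse-effect module is why a score of 4 denotes the absence of adverse effects in the results.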
Nonparametric tests were used due to the non‐normal sample distribution. The Wilcoxon signed‐rank test was used for intragroup comparisons, and the Kruskal–Wallis test was employed for intergroup comparisons, with a p < 0.05 considered statistically significant.
Results 3.1 Study Subjects A total of 85 patients were enrolled in the study: 17 men (20.0%) and 68 women (80.0%) aged 18–45 years; the mean (±SD) age was 29.3 ± 6.7 years. Descriptive statistics on patient demographics across treatment groups are shown in Table . Overall, patient age and gender distribution was similar in the three groups. 3.2 Efficacy of Treatments The efficacy of the treatments was evaluated using the FACE‐Q scores for the nose 1 month and a year posttreatment. Nose FACE‐Q scores before treatment, 1 month, and a year posttreatment are summarized in Table . At baseline, all groups showed similar nose FACE‐Q scores with an overall mean (±SD) of 17.8 ± 3.6, suggesting a similar level of concern regarding their nasal shape. No statistical difference between the three treatment groups was observed at baseline (Kruskal–Wallis test, p = 0.489).
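The nonparametric intergroup comparison used throughout these results can be illustrated with a dependency-free sketch of the Kruskal–Wallis H statistic (tie correction omitted for brevity; the score arrays below are hypothetical rather than the study data, and in practice a library routine such as scipy.stats.kruskal would be used).

```python
from itertools import chain

def average_ranks(values):
    # Rank all values (1-based), assigning tied values their average rank.
    s = sorted(values)
    rank_of = {}
    i = 0
    while i < len(s):
        j = i
        while j < len(s) and s[j] == s[i]:
            j += 1
        rank_of[s[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    return [rank_of[v] for v in values]

def kruskal_wallis_h(groups):
    # H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), without tie correction
    pooled = list(chain.from_iterable(groups))
    ranks = average_ranks(pooled)
    n = len(pooled)
    h, idx = 0.0, 0
    for g in groups:
        h += sum(ranks[idx:idx + len(g)]) ** 2 / len(g)
        idx += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Hypothetical FACE-Q score samples for three small groups; the resulting H is
# compared against the chi-squared critical value with k-1 = 2 df (5.99 at alpha = 0.05).
h = kruskal_wallis_h([[31, 33, 35], [30, 32, 34], [36, 37, 38]])
```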
One month posttreatment, the FACE‐Q scores in all three groups were statistically significantly higher than at baseline (Table ): in subjects treated with Filler + BTX and Threads + BTX, the mean nose FACE‐Q scores amounted to 33.4 ± 1.9 and 31.9 ± 2.5, respectively. Comparison of the FACE‐Q scores in the three treatment groups revealed a statistically significant difference between the groups (Kruskal–Wallis test, p = 0.001). In the Threads + Filler + BTX group, the mean (±SD) nose FACE‐Q score was 36.2 ± 2.6, which was statistically significantly higher compared to both the Filler + BTX and Threads + BTX groups (Table ). The long‐term efficacy assessment showed a decrease in the nose FACE‐Q score in the Filler + BTX group to 22.9 ± 3.0 one year posttreatment. Similarly, the Threads + BTX group showed a decrease to 25.1 ± 2.9 at 1 year. However, the Threads + Filler + BTX group exhibited a smaller reduction in efficacy, with scores decreasing to 30.0 ± 0.7 at 1 year. The FACE‐Q score decrease 1 year after treatment was statistically significant when compared to the corresponding values 1 month posttreatment in all three groups. However, in all three groups, the FACE‐Q scores remained statistically significantly higher than at baseline (Table ). These findings indicate that the combination of threads, HA filler, and botulinum toxin results in higher patient satisfaction and improved aesthetic outcomes in both the short and long term. One year posttreatment, the statistical difference in nose FACE‐Q scores between the three groups was retained (Kruskal–Wallis test, p < 0.001). The Threads + Filler + BTX group showed statistically significantly better outcomes than the Filler + BTX and Threads + BTX groups. The mean (±SD) nose FACE‐Q score in the Threads + Filler + BTX group was approximately 30% and 20% higher than that in the Filler + BTX and Threads + BTX groups, respectively, largely due to the stronger score decay during the year in these two groups.
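The quoted relative differences can be reproduced as a quick arithmetic check from the reported 1-year group means (means only; this is not an analysis of patient-level data):

```python
# Reported 1-year mean nose FACE-Q scores
combo, filler_btx, threads_btx = 30.0, 22.9, 25.1

pct_vs_filler = (combo - filler_btx) / filler_btx * 100    # ~31% higher
pct_vs_threads = (combo - threads_btx) / threads_btx * 100  # ~20% higher
```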
No significant difference between the Filler + BTX and Threads + BTX groups was observed at either 1 month or 1 year posttreatment, indicating similar efficacy in patient satisfaction for these two groups. Thus, before treatment, all groups showed a similar satisfaction level regarding their nasal shape. One month and 1 year posttreatment, patients in the Threads + Filler + BTX group expressed more satisfaction than those in the other two groups—the Filler + BTX and Threads + BTX groups, which exhibited similar levels of patient satisfaction at these time points (Figures , , , ). 3.3 Adverse Effect Assessment With FACE‐Q Nasal adverse effects as per the FACE‐Q questionnaire were assessed at two time points: 48 h and 1 week posttreatment. Overall, at 48 h, the mean (±SD) FACE‐Q adverse effect score was 6.2 ± 1.6. Specifically, in the Filler + BTX group, the mean FACE‐Q adverse effect score was 5.7 ± 1.4; in the Threads + BTX group, it was 8.4 ± 1.4; and in the Threads + Filler + BTX group, it was 7.3 ± 1.0. At 48 h, a statistically significant difference in FACE‐Q adverse effect scores between the three treatment groups was observed (Kruskal–Wallis test, p < 0.001). In group pairwise comparisons, the adverse effects were statistically significantly more pronounced in the thread‐treated groups (Threads + BTX and Threads + Filler + BTX) than in the Filler + BTX group, owing to the higher invasiveness of thread insertion (Table ). One week posttreatment, the mean FACE‐Q adverse effect score in all three groups reached a similar level of 4 points (the scale minimum), reflecting complete absence of adverse effects. By 1 week posttreatment, the FACE‐Q adverse effect scores had statistically significantly decreased in all groups compared to those at 48 h (Table ). No statistically significant difference between groups was observed (Kruskal–Wallis test, p = 0.840).
Discussion Thread lifting has become an increasingly popular aesthetic procedure due to its effective lifting capabilities and reduced downtime compared to traditional surgical methods. Nonsurgical cosmetic interventions are highly sought after, as many patients have demanding schedules and prefer treatments that yield quick results with minimal complications and social downtime. Our findings indicate that the combination of threads and botulinum toxin provides similar outcomes to fillers and botulinum toxin, with both groups showing statistically significant improvements at 1 month and 1 year posttreatment. While there were no significant differences in efficacy between these two groups, the superiority of barbed threads could be anticipated. The similarity in improvement can be attributed to the nature of the assessment tool, FACE‐Q, which provides a subjective evaluation of patient satisfaction but does not capture specific anatomical details such as nasal hump or tip. Consequently, future studies should incorporate objective assessment methods to capture these nuances more effectively.
Clinical observations suggest that the comparable improvement observed in the Threads + BTX and Filler + BTX groups may be explained by the recurrence of the nasal hump 1 year after treatment in thread‐treated patients. The high G prime properties of the HA filler likely contribute to superior dorsal support, maintaining nasal contour stability over time. In contrast, while patients in the thread‐treated group exhibited a clinically significant nasal tip lift at 1 year, often more pronounced than that in the HA + BTX group, the nasal hump recurred more frequently in this cohort compared to the HA + BTX group. As noted, the addition of threads to HA filler and botulinum toxin yielded superior results, demonstrating statistically significantly higher patient satisfaction in the Threads + Filler + BTX group both 1 month and 1 year after treatment. Specifically, 1 year after treatment, a highly statistically significant difference between groups was observed in favor of the Threads + Filler + BTX treatment (p < 0.001). We acknowledge the statistical population imbalance observed in this study, which arose due to the natural distribution of patients during the data collection period. Clinically, this imbalance may be considered justifiable, as many patients demonstrate a preference for fillers and botulinum toxin injections owing to the shorter procedure time, reduced post‐procedural swelling, and the perception of these treatments as less invasive compared to threads. Additionally, threads are often regarded as a more advanced and potentially aggressive treatment option, which may influence patient decision‐making and account for the smaller group of thread‐treated individuals. This study had limitations, in particular the relatively small number of thread‐treated patients and the lack of objective assessment using detailed measurements or 3D imaging software specifically designed for nasal shape evaluation.
Future research should aim to incorporate reliable 3D programs for objective nasal shape analysis, capable of reporting skin surface changes and potential asymmetries, to enhance the precision and reliability of outcomes. Conclusion This study demonstrates that the combined modality of botulinum toxin, HA filler, and barbed threads is safe and provides superior short‐term and long‐term outcomes for nonsurgical rhinoplasty. The addition of HA filler and botulinum toxin at the nasion and nasal tip 2 weeks after thread insertion produced more rewarding results, enhancing both immediate and sustained aesthetic improvements. Although thread treatments initially had more pronounced adverse effects, these were temporary and resolved within a week. Despite the study's limitations, including the small sample size in the thread‐treated groups and the lack of objective assessment tools like 3D imaging, the findings support the effectiveness and safety of this combination approach. Future research should aim to incorporate advanced imaging techniques for more precise evaluations. Overall, the combined use of barbed threads, HA fillers, and botulinum toxin offers a promising and effective alternative to surgical rhinoplasty. G. Ziade designed the study, wrote the initial manuscript, and provided data interpretation. R. Saade contributed to data collection, interpretation, and manuscript writing. D. Daou performed statistical analysis. D. Karam assisted in data collection and study design. A. Bendito revised the manuscript, and M. Tsintsadze contributed to the study design. The study protocol was approved by the Ethics Committee of the Lebanese University. Informed consent, consent to reproduce their recognizable photographs, and consent for publication were obtained from every study subject. The authors declare no conflicts of interest.
Selective Ligands for Non-Canonical DNA Structures: Do They Have a Future in Medicinal Chemistry?
The paper by Folini and colleagues offers a thoroughly updated critical view on the possibility of targeting of G4 in cancer, considering the vast repertoire of G4 binders (over 1000) as compared to the paucity of clinical trials (only two) started 6 years ago, the results of which are not available at present. The paper considers the issues of multiple mechanisms of action, synthetic lethality targeting, and adaptive responses. The conclusion follows that several challenges need to be addressed to obtain pharmacologically relevant G4 binders. These include high selectivity in target recognition, accurate unified screening procedures, and rational modifications of the molecular scaffold to grant both efficient binding and favorable pharmacokinetic and toxicity (ADMET) properties. G4s represent conserved features in the evolution tree, which strengthens the idea of a vital role played in biological systems, including viruses. Hence, not only can anticancer agents be targeted at G4, but also antiviral compounds. Richter and colleagues carefully investigated several DNA and RNA viruses, showing G4 regions both in the genome and mRNA. The addition of G4 ligands produces modulatory effects depending upon the type of virus and ligand structure. It should be mentioned that the tested ligands did not show elevated levels of G4 preference, being able to bind off-target double helical sequences as well, which are largely prevalent in the genome. The observed biological effects included promoter activity modulation, virion trapping, decreased or inhibited genome replication, and specific gene expression modulation. The authors conclude that G4 targeting could represent a valid approach to manage viral infections.
The challenge here is to improve ligand selectivity for viral G4s to avoid serious side-effects arising from interference with human G4s. In their contribution, Lee and colleagues describe the structural effects observed upon chemical modification of nucleotides. Methyl, Br, and aryl substituents were incorporated into purine C8 and pyrimidine C5 positions. When introduced into a DNA chain, these modifications affect the local conformational properties and may help us understand the effects of covalent base alterations in genetic and epigenetic processes. As mentioned before, this topic is quite popular, but, to date, no truly selective agents have been described. In principle, it should be possible to efficiently discriminate among G4s, as they are generally inserted in different sequence contexts, adopt precise G4 conformational state(s), and exhibit sequentially distinct intervening loops. However, different conformations are separated by small energy barriers and can easily interconvert. Finally, the flat, positively charged scaffold of the ligands can partially intercalate, with the double helical portion of DNA acting as a "sink" that prevents specific G4-related effects by mass action. Among new ligands, a group of bis-triazolyl pyridines was synthesized and investigated by Di Porzio and colleagues . The design of these molecules fulfilled three requirements: a planar aromatic system for stacking purposes, a V-shaped form to maximize interactions with G4, and two or three positively charged groups to reinforce binding. Biophysical studies on G4s are generally performed using powerful spectroscopic techniques, such as circular dichroism, NMR, and fluorescence measurements, to facilitate the identification and quantitative determination of G4 species present in complex mixtures.
A multivariate analysis considering G4/iM (intercalated motif, C-rich quadruplex DNA) modulation indicated two derivatives, able to stabilize G4 while destabilizing iM, that deserve further investigation. Marzano and colleagues show that pharmacologically significant G4 targeting can also be applied to RNA-containing species, such as TERRA, a tetraplex-folded sequence modulating telomerase activity, heterochromatin formation, and homologous recombination. Virtual screening methods, in tandem with experimental testing, allowed the identification of compound BPBA, consisting of two benzimidazole moieties connected through an aniline residue, as a very efficient G4-RNA binder. Biological studies in cell systems confirm the chemical results in vitro by showing specific interference with TERRA activity. It should be noted that TERRA interacts with telomeric chromatin, forming a hybrid DNA-RNA quadruplex. Analytical studies are performed by means of the above-mentioned spectroscopic techniques. Nowak-Karnowska and colleagues measured the fluorescence of 9-methoxy luminarine in the presence of G4-forming sequences. Substantial signal quenching was found in the presence of the parallel c-MYC G-quadruplex, possibly because of stacking interactions involving the planar aromatic regions of the interacting species. The test ligand cannot induce G4 formation or stabilize preformed G4 structures. These properties suggest the use of fluorescence measurements with 9-methoxy luminarine to preliminarily assess parallel vs. non-parallel G4 topologies. Using NMR and modeling techniques, Dallavalle and colleagues investigated the binding of curaxins, in particular CBL0137, to non-canonical DNA structures. G4 binding was not originally included among the proposed mechanisms of action, but recent findings seem to support it. To confirm this hypothesis, NMR studies were performed with the human telomere and c-MYC promoter sequences.
In both cases, curaxin was bound to G4s, forming two types of complexes with 1 or 2 ligands bound per oligonucleotide unit. Moreover, curaxin intercalates into double-stranded DNA, demonstrating poor binding selectivity. The real role of drug binding to G4 structures in the presence of double-stranded DNA requires competition measurements to assess drug distribution between the G4 and B forms. Continuing with NMR spectroscopy, Krafčík and colleagues discuss the in-cell technique to examine the quantitative binding of low-molecular-weight ligands to nucleic acids, enabling a high-resolution readout on the structure and interactions of targeted species. This technique is highly valuable as the measurements are made under physiologically relevant conditions. Unfortunately, the in-cell 1H study is hardly applicable to polymorphic G4s and their complexes due to the substantial broadening and overlapping of resonance peaks. To overcome this drawback, G4 constructs were labelled with a 3,5-bis(trifluoromethyl)phenyl tag. The use of 19F-detected in-cell NMR may thus represent a valuable methodology to be applied in profiling G4–ligand interactions in vivo. Despite the huge amount of work carried out on G4 binders, there is still no resolute answer to the original question of whether agents targeted at non-canonical nucleic acid sequences will eventually become effective and safe drugs. In fact, several issues (see also above) should be more comprehensively approached and thoroughly dissected:
- Design ligands with higher selectivity for a given G4 arrangement, taking advantage of the local environment and the nature (sequence, orientation, and length) of the connecting loops. The charged nature of these molecules might not afford the best conditions for selectivity, considering that cationic binders will exhibit non-negligible electrostatic binding affinity for the canonical double helical form too.
- Find a rationale for discriminating among the various non-canonical nucleic acid conformations by test ligands.
- Quantitate canonical vs. non-canonical binding distribution within a cell, considering the large prevalence of B-DNA at pharmacologically relevant conditions.
- Make sure that the experimental setting in vitro does not create artifacts by stabilizing species not occurring in vivo.
- Define standard protocols of investigation to properly compare results from different labs, with a particular focus on identifying a few reference nucleic acid sequences to be used.
- Develop artificial intelligence algorithms to unveil particular ligand features conferring high specificity and selectivity.
- Make sure to fulfil ADMET requirements prior to implementing costly and time-consuming synthetic efforts.
- Consider the possibility of simultaneous recognition by the binder of two or more G4 arrangements close in space, conferring a higher degree of selectivity.
- Do not neglect the kinetic aspects of the binding, which might discriminate fast-forming species from slowly assembling structures.
Currently, the road to success appears challenging since much work (and time) is required to contend with the basic problems. However, it is worth mentioning that immunotherapy drug development took a few decades of poorly successful efforts before producing blockbuster checkpoint inhibitors. Hence, we are confident in predicting a way to rationally transform a specific G4 ligand into a real drug in the not-too-distant future.
Simplifying Mismatch Repair Deficiency Screening in Endometrial Adenocarcinoma: Immunohistochemistry with Two-Antibody Panel (PMS2 and MSH6)

Mismatch repair deficiency (dMMR) is a well-established characteristic of endometrial adenocarcinoma that plays a crucial role in Lynch syndrome screening, guiding adjuvant treatment decisions, and identifying potential candidates for immune checkpoint inhibitors [1, 2]. The MMR system plays a crucial role in maintaining genomic stability by rectifying DNA replication errors and preventing the accumulation of mutations. Inherited or acquired defects in MMR genes can lead to dMMR, which is a hallmark of Lynch syndrome, an autosomal dominant cancer predisposition syndrome that increases the risk of various cancers, including endometrial cancer [3]. Approximately 3% of all endometrial cancer cases are linked to Lynch syndrome, and up to 60% of dMMR endometrial cancers are associated with this syndrome [4]. A recent study in Thai endometrial cancer patients reported a detection rate of dMMR in 34.9% of cases, suggesting the consideration of MMR immunohistochemistry in all patients, regardless of personal or family history of Lynch syndrome-related cancers [5, 3]. The current guideline for dMMR screening in endometrial cancer involves utilizing a four-antibody panel to assess the expression of MMR proteins (MLH1, MSH2, MSH6, and PMS2) through immunohistochemistry (IHC) [6]. This four-antibody panel is recommended by several professional societies and widely recognized as the standard approach for dMMR screening in endometrial cancer [7, 6, 8]. Recent analyses have yielded compelling evidence that substantiates the cost-effectiveness of employing the MMR IHC approach for germline testing [9, 10]. Moreover, it is important to note that MMR testing is not limited to endometrial cancer alone.
It also holds significant value for patients diagnosed with colorectal cancer and other types of carcinomas that are associated with Lynch syndrome, including ovarian, stomach, and urothelial carcinomas, along with their respective families [11]. The accurate repair of DNA relies on the essential role of mismatch repair (MMR) proteins, which function as heterodimer complexes. Specifically, MLH1 forms a stable heterodimer with PMS2, while MSH2 pairs with MSH6. In cases where there is a loss of protein function in PMS2 or MSH6, MLH1 and MSH2 can form heterodimers with alternative proteins. Consequently, when evaluating MMR protein expression using IHC, negative staining is expected for both MMR proteins within the affected heterodimer [12]. Recent reports have suggested a simplified strategy for dMMR screening in endometrial adenocarcinoma, which involves the use of only two antibodies, PMS2 and MSH6 [13-15]. This study aims to compare the diagnostic performance of a simplified dMMR screening strategy utilizing only PMS2 and MSH6 IHC, as opposed to the traditional four-antibody panel, in a cohort of endometrial cancer samples. We hypothesize that PMS2 and MSH6 IHC can effectively replace the four-antibody panel without compromising the accuracy of dMMR detection. The results of this study may have significant implications for the clinical management of endometrial adenocarcinoma patients by simplifying the screening process, reducing testing costs, and improving the detection of Lynch syndrome in affected individuals and their families. This retrospective cohort study included endometrial carcinoma patients diagnosed between 2013 and 2022 at King Chulalongkorn Memorial Hospital, Bangkok, Thailand. Patients diagnosed with endometrial cancer and treated during the specified period were identified using the ICD-10 code C54 for endometrial cancer from electronic medical records.
The inclusion criteria for this study were patients with histologically confirmed endometrial carcinoma who received treatment at the hospital between 2013 and 2022 and underwent dMMR screening using the four-antibody panel. Patients with prior chemotherapy and radiation therapy, incomplete medical records, or unclear/inexplicable MMR staining results were excluded from the study. The data collection process involved retrieving clinical data from medical records, including age at diagnosis, menopausal status, histologic subtype, and cancer staging according to the 2018 International Federation of Gynecology and Obstetrics (FIGO) uterine cancer staging system. Family history of cancers and Lynch-related cancers were also collected. The Revised Bethesda guidelines were evaluated in each patient. Formalin-fixed, paraffin-embedded tissue sections from hysterectomy specimens were utilized for IHC analysis. Hematoxylin and eosin-stained slides from each patient were reviewed to confirm the diagnosis and select representative tumor areas for IHC analysis. Specialized pathologists reviewed the IHC results for MLH1, MSH2, MSH6, and PMS2 antibodies. In cases where discrepancies arose in the results, consensus review was conducted to resolve them. The assessment of normal expression of MMR proteins entailed the examination of nuclear staining within tumor cells as a reliable indicator. To establish a positive internal control, nuclear staining within infiltrating lymphocytes and/or normal stromal cells was employed. MMR deficiency or the loss of expression signified the absence of detectable levels in at least one of the four MMR proteins. Specifically, in the context of the two-antibody panel, dMMR precisely indicated the loss of expression in either PMS2 or MSH6. Conversely, in the four-antibody panel, dMMR corresponded to the loss of expression in one or more of the four proteins. Patients with dMMR underwent genetic counseling with a geneticist. 
Following genetic counseling, germline testing was conducted by extracting DNA from saliva or peripheral blood samples upon agreement. If a germline mutation was detected, comprehensive cancer surveillance was provided to the entire family. Statistical analysis was performed using SPSS version 22.0 (IBM Corporation, Armonk, New York, USA). Quantitative data were analyzed and presented as mean ± standard deviation (SD), while qualitative data were reported as frequencies and percentages. The agreement between the two MMR staining methods was assessed using Cohen’s kappa coefficient. The interpretation of kappa values is as follows: values ≤ 0 indicate no agreement, 0.01–0.20 represent none to slight agreement, 0.21–0.40 indicate fair agreement, 0.41–0.60 suggest moderate agreement, 0.61–0.80 imply substantial agreement, and 0.81–1.00 indicate almost perfect agreement [16]. Between January 2013 and December 2022, a total of 304 endometrial cancer patients were included in this study. The participants had a mean age of 58.1 ± 11.9 years (ranging from 20 to 85 years). Among them, 24 patients (7.9%) were below 40 years of age, and 61 patients (20.1%) were between 40 and 50 years old. 214 patients (70.4%) were postmenopausal at the time of surgery. Additionally, 82 patients (27%) had a body mass index (BMI) exceeding 30 kg/m², with a mean BMI of 27.2 kg/m² for the overall study population. Regarding family history, 52 patients (17.1%) had at least one family member with any type of cancer, while 44 patients (14.5%) had at least one family member affected by Lynch-related cancers. Based on the Bethesda guidelines, 77 patients (25.3%) met the criteria for further evaluation. Of the total patients, 232 patients (76.3%) were categorized as stage I, 14 patients (4.6%) as stage II, 48 patients (15.8%) as stage III, and 10 patients (3.3%) as stage IV.
In terms of histology, 278 patients (91.4%) were classified as endometrioid, 10 patients (3.3%) as mixed adenocarcinoma, 9 patients (3%) as carcinosarcoma, and 7 patients (2.3%) as papillary serous carcinoma. Regarding tumor grade, 153 patients (50.3%) were grade 1, 72 patients (23.7%) were grade 2, and 79 patients (26%) were grade 3. Among the patients, 82 (27%) demonstrated the loss of expression in at least one MMR protein using the four-antibody panel. Specifically, out of these 82 patients, 54 showed the loss of MLH1 and PMS2 expression, 16 showed the loss of MSH2 and MSH6 expression, 6 showed the loss of only MSH6 expression, 5 showed the loss of only PMS2 expression, and one patient showed the loss of only MSH2 expression. Notably, the patient who showed the sole loss of MSH2 expression underwent germline testing, and the results were intriguing as they revealed no MMR gene mutation. Using the two-antibody panel, dMMR was detected in 81 patients (26.6%), with only one patient showing the loss of any MMR protein using the four-antibody panel that could not be detected using the two-antibody panel. Overall, the results from the two-antibody panel agreed with the four-antibody panel in 98.8% (81/82) of patients. The agreement between the two-antibody panel and four-antibody panel was measured using Kappa correlation, yielding a value of 0.992 (SD = 0.008, p-value < 0.001). Out of the 59 patients who had loss of PMS2 expression, 54 patients also showed loss of MLH1 expression. Similarly, all 54 patients with loss of MLH1 expression exhibited loss of PMS2 expression. In patients who had loss of MSH6 expression (n = 22), 16 of them also displayed loss of MSH2 expression. The importance of dMMR in endometrial adenocarcinoma cannot be overstated. dMMR is a well-established characteristic of this cancer type and is associated with a favorable prognosis and responsiveness to immune checkpoint inhibitors [17].
In our study, we observed that approximately 27% of the endometrial cancer patients demonstrated loss of expression in at least one MMR protein using the four-antibody panel. This finding is consistent with previous reports highlighting the prevalence of dMMR in endometrial cancer. A previous study of Thai endometrial cancer patients in 2021 reported that 34.9% of surgical specimens had one or more MMR deficiencies [5]. In 2022, a study from Iran found that 23% of patients were identified as MMR-deficient through IHC screening [18]. Notably, the majority of dMMR patients in our cohort showed loss of MLH1 and PMS2 expression, underscoring the importance of these proteins in the development of dMMR. The traditional approach to dMMR screening in endometrial cancer involves the use of a four-antibody panel comprising MLH1, MSH2, MSH6, and PMS2. This four-antibody panel has been widely adopted as the standard approach for dMMR screening in endometrial cancer based on recommendations from professional societies. However, our study explores the potential of a simplified screening strategy utilizing only PMS2 and MSH6 IHC. This approach is motivated by the frequent co-expression of PMS2 and MSH6 in endometrial tumors and the higher prevalence of MSH6 inactivation in endometrial cancer [19]. By utilizing only two antibodies, this simplified approach aims to streamline the screening process, reduce testing costs, and improve the detection of Lynch syndrome, a cancer predisposition syndrome associated with dMMR. Our study demonstrates promising concordance between the two-antibody panel and the four-antibody panel. By using the two-antibody panel, we detected dMMR in 26.6% of patients, with only one patient showing the loss of MSH2 expression using the four-antibody panel that could not be detected using the two-antibody panel. This patient underwent germline testing. Interestingly, the germline testing revealed no MMR gene mutation.
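As an arithmetic cross-check of the concordance figures quoted above (81 of 82 dMMR cases detected by both panels among 304 patients), Cohen's kappa can be recomputed from a 2×2 table reconstructed from those counts. This is an illustrative sketch, not the authors' original SPSS computation:

```python
# Cohen's kappa for the agreement between the four-antibody and
# two-antibody panels, recomputed from the counts reported in the text.
# The 2x2 table is reconstructed from those counts (81 concordant dMMR,
# 1 dMMR case missed by the two-antibody panel, 304 - 82 = 222
# concordant MMR-proficient cases); it is an illustration, not raw data.

def cohens_kappa(table):
    """table[i][j]: cases rated category i by panel A and j by panel B."""
    n = sum(sum(row) for row in table)
    k = len(table)
    p_observed = sum(table[i][i] for i in range(k)) / n
    # Chance agreement expected from the marginal totals.
    p_expected = sum(
        (sum(table[i]) / n) * (sum(row[i] for row in table) / n)
        for i in range(k)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Rows: four-antibody panel (dMMR, pMMR); columns: two-antibody panel.
table = [[81, 1],
         [0, 222]]

kappa = cohens_kappa(table)
print(f"kappa = {kappa:.3f}")  # 0.992, matching the reported value
```

By the interpretation bands quoted in the Methods, this value falls in the "almost perfect agreement" range (0.81–1.00).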
Furthermore, the implementation of the two-antibody panel offers several advantages, including reduced testing costs, streamlined workflow, and improved overall efficiency. Specifically, by adopting the two-antibody approach, we can save approximately 1,420 THB (41.26 USD) per test, significantly reducing the total cost from 2,940 THB (85.43 USD) for the four-antibody panel to 1,520 THB (44.17 USD) for the two-antibody panel. This cost reduction is especially significant, considering the large number of endometrial adenocarcinoma patients that may require dMMR screening. The high agreement rate of 98.8% between the two methods further supports the reliability and validity of the simplified approach. By simplifying the dMMR screening process in endometrial adenocarcinoma, our study has potential clinical implications. The two-antibody panel not only streamlines the diagnostic process but also enhances accessibility to dMMR screening, particularly for patients without a personal or family history of Lynch syndrome-related cancers. Additionally, the simplified approach has the potential to improve the detection of Lynch syndrome in affected individuals and their families, enabling appropriate genetic counseling and comprehensive cancer surveillance. Nevertheless, it is important to acknowledge the limitations of our study. The retrospective nature of the study design and the use of data from a single institution may introduce selection bias. Further studies with larger multicenter cohorts are warranted to validate the findings and assess the generalizability of the simplified approach. Additionally, long-term follow-up and evaluation of patient outcomes are necessary to fully understand the clinical impact of adopting the two-antibody panel in dMMR screening. In conclusion, our study provides evidence supporting the use of a simplified two-antibody panel utilizing PMS2 and MSH6 IHC in endometrial adenocarcinoma. 
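The per-test saving quoted above follows directly from the two prices; the short sketch below also projects the saving over a cohort the size of this study (the 304-patient projection is our own illustration, not a figure from the text):

```python
# Per-test and cohort-level cost difference between the two panels,
# using the THB prices quoted in the text.
FOUR_ANTIBODY_PANEL_THB = 2940
TWO_ANTIBODY_PANEL_THB = 1520

saving_per_test = FOUR_ANTIBODY_PANEL_THB - TWO_ANTIBODY_PANEL_THB
print(saving_per_test)  # 1420, the per-test saving stated above

# Scale by this study's own cohort size for a rough projection.
cohort = 304
print(saving_per_test * cohort)  # 431680 THB over a cohort of this size
```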
This approach demonstrates a high level of concordance with the traditional four-antibody panel, indicating its potential as an alternative method for reflex MMR status testing. The implementation of this simplified approach has the potential to streamline the diagnostic process, reduce costs, and improve the detection of Lynch syndrome in affected individuals and their families. Further research with larger cohorts is warranted to validate our findings and investigate the concordance of germline testing with the two-antibody panel, thereby assessing the broader clinical implications of this approach in routine practice.

Pinyada Panyavaranant: Conceived and designed the analysis, collected data, contributed data or analysis tools, performed the analysis, wrote the paper.
Natkrita Pohthipornthawat: Collected data, reviewed pathology, contributed data or analysis tools.
Tarinee Manchana: Conceived and designed the analysis, provided advice on the concept, wrote the paper.
Follicular dendritic cell sarcoma involving the parotid gland with expression of the melanocytic marker PRAME

Soft tissue sarcomas are rare tumors comprising roughly 1% of malignancies in adults. Despite their rarity, they exhibit a substantial mortality rate, contributing to about 3–4% of cancer-related fatalities each year. FDC sarcoma is an exceedingly uncommon form of sarcoma characterized by its low to intermediate malignant nature. It originates from follicular dendritic cells, yet instances of its occurrence in extranodal locations such as the mediastinum, gastrointestinal tract, liver, and spleen have also been documented. Only four prior occurrences of intra-parotid FDC sarcomas have been documented. Our patient is a 65-year-old male who presented with a right parotid mass and bilateral neck lymphadenopathy. He had an undocumented history of a cutaneous right cheek lesion that was previously biopsied and thought to represent B-cell lymphoma. On examination, there was a firm mass at the right parotid tail. There was also palpable right neck lymphadenopathy in level 2A and level 3. Magnetic resonance imaging (MRI) of the neck showed a 5.1 × 4.5 × 8.3 cm enhancing heterogeneous T2 hyperintense lesion involving the right superficial parotid gland. Initially, an ultrasound-guided core biopsy was performed, which showed a poorly differentiated neoplasm, suggestive of FDC sarcoma. The patient then underwent right total parotidectomy and bilateral neck dissection. Sections of the parotid mass showed an infiltration of large cells with irregular nuclei, vesicular chromatin, prominent nucleoli, and moderate cytoplasm. A subset of the cells showed atypia with enlarged, highly irregular, and hyperchromatic nuclei. The malignant cells expressed CD21, CD23 (subset), CD35 (small subset), CXCL13 (subset), vimentin, fascin, and clusterin, suggestive of FDC origin (Fig. ).
The malignant cells also expressed CD4 and CD5 (subset) but were negative for all other T-cell markers (CD2, CD3, CD7, CD8, CD43, TIA-1, BF-1). Since a subset of FDC sarcomas can be associated with indolent T-lymphoblastic proliferations, a TdT stain was performed and was negative. EBV was negative by in situ hybridization (EBER). Podoplanin (D2-40), which can be utilized as a marker for follicular dendritic cells, was negative in our case. In addition, the malignant cells were positive for PRAME but negative for all other melanoma markers (S100, HMB45, Melan A, and SOX10). The infiltrate involved the parotid gland parenchyma and directly adjacent lymph nodes. Table illustrates the different antibody clones used in the case. A next-generation sequencing (NGS) test was performed (Tempus 648 genes xT panel) and it detected 8 likely pathogenic somatic variants, including TP53, RB1, and FBXW7 loss-of-function variants. B-cell gene rearrangement studies by polymerase chain reaction (PCR) were performed but showed inconclusive results. Table illustrates the different mutations detected along with their variant allele frequency (VAF). Taken together, the overall picture supports a diagnosis of follicular dendritic cell (FDC) sarcoma. A follow-up appointment was arranged with the Radiation Oncology department for further assessment and management. Follicular dendritic cells are a specialized type of dendritic cells that are largely restricted to lymphoid follicles. They form dense three-dimensional meshworks within benign follicles, which maintain the follicular architecture. FDC sarcoma is a neoplastic proliferation of cells showing morphologic and immunophenotypic features of follicular dendritic cells. The etiology of that neoplastic transformation is unknown, although it may evolve in situations in which there is FDC hyperplasia and overgrowth.
It usually occurs de novo; however, it can sometimes occur in association with hyaline vascular Castleman disease, whether simultaneously or as a succeeding event. It presents as a painless solid mass, usually nodal (mainly cervical lymph nodes), but it can also involve extranodal sites, such as the tonsils, spleen, skin, and gastrointestinal tract. A new variant has been recently described: EBV-positive inflammatory follicular dendritic cell tumor, which is reported to occur exclusively in the liver and spleen, exhibit a more interspersed lymphoplasmacytic infiltrate, and express EBV by in situ hybridization. Overall, FDC sarcoma is considered a low-grade sarcoma that has a significant recurrence rate in nearly half the cases, and it can also metastasize. Surgical resection remains the best treatment for these tumors. Histologically, these tumors can be difficult to diagnose, as the morphological spectrum is broad and often causes confusion. Cytological atypia is present only in a subset of cases, and mitotic figures are common but highly variable in number. By immunohistochemistry, FDCs express CD21, CD23, CD35, CXCL13, and clusterin. They also usually express vimentin, fascin, HLA-DR, and EMA and are variably positive for CD68, S100, and CD45. Clusterin staining is reported to be highly sensitive (100%) and specific (93%) and, along with CD21 and CD23, constitutes the essential stains required to establish a definitive diagnosis. PRAME staining exhibits diffuse positivity in most melanomas, while typically presenting as negative or showing limited and focal immunoreactivity in nevi. Variable degrees of PRAME staining have been sporadically observed in other malignant tumors, including most synovial sarcomas, myxoid liposarcomas, and malignant peripheral nerve sheath tumors (MPNST).
Other neoplasms such as seminomas and carcinomas of various origins, including endometrial, serous ovarian, mammary ductal, lung, and renal, showed an intermediate proportion of cases and variable extent of tumor cells positive for PRAME protein expression. To our knowledge, PRAME positivity has not been reported in FDC sarcoma before. In our case, PRAME is positive but all other melanoma markers (S100, HMB45, Melan A, and SOX10) are negative. A few FDC sarcoma cases with aberrant phenotypes have been reported before, including a case of intra-abdominal FDC sarcoma with pleomorphic features and aberrant expression of neuroendocrine markers, an unusual case of FDC sarcoma of the omentum with pleomorphic morphology and aberrant cytokeratin expression, another case with aberrant T-cell antigen expression, and a clinicopathologic study of 15 FDC cases with expression of MDM2, somatostatin receptor 2A, and PD-L1. Although the genetic drivers of tumorigenesis in FDC sarcoma are largely unknown, recent genomic profiling studies have revealed several recurrent gene alterations in FDC sarcoma, including the BRAF V600E mutation and loss-of-function variants in tumor suppressor genes involved in the regulation of the NF-κB pathway and cell cycle, such as the NFKBIA, CYLD, CDKN2A, and RB1 genes. In addition, genomic profiling for one patient with primary esophageal follicular dendritic cell sarcoma revealed pathogenic variants in multiple genes, including CHEK2, FAT1, TP53, DPYD, ERBB2IP, FBXW7, KMT2D, PPP2R1A, and TSC2. The NGS results for this patient identified loss-of-function pathogenic variants in RB1 (p.W516*), TP53 (p.G187D), and FBXW7 (p.S294fs), which have been reported previously in FDC sarcoma patients, supporting the FDC sarcoma diagnosis. In conclusion, we report a case of FDC sarcoma with an unusual extranodal localization in the parotid gland. Furthermore, the aberrant positive expression of the melanocytic marker PRAME has not been reported before.
All other melanocytic markers were negative in our case, while the characteristic FDC markers were positive.
An unusual suicide by self-waterboarding: forensic pathological issues

Waterboarding (WB), also called water torture or simulated/controlled drowning, is a method of military torture in which water is poured into the nostrils and the mouth of a victim who lies on his back on an inclined platform, with his feet above his head (Trendelenburg position). The victim's hands and feet are always firmly tied or restrained by other people. The gag reflex is stimulated by the water, which fills the oropharynx. In this way, the air is completely expelled from the lungs, leaving the victim unable to exhale and incapable of inhaling without aspirating water. Furthermore, the victim's mouth and nose are covered with a hydrophilic cloth or a canvas bag, which allow water to enter the airways but prevent it from being expelled. Although water usually enters the lungs, it does not immediately fill them, owing to their elevated position with respect to the head and the neck. The victim cannot therefore control the water flow and may be made to drown for short periods without suffering fatal asphyxiation by drowning. The torture is eventually halted, and the victim is put in an upright position to allow him to cough and vomit the water ingested, or to revive him if he is unconscious, after which the torture may be resumed. Waterboarding causes extreme physical suffering and an uncontrollable feeling of panic and terror. This case report presents the forensic pathological description of a fatal case of waterboarding. The victim was a 22-year-old male student, who was found dead in the bathtub of his own house. The body was naked, the head was covered by a soaked canvas bag, and both his hands were firmly tied with two nylon ropes and bound with a padlock. The water jet of the showerhead was specifically directed at the victim's head, so that the canvas bag could be soaked with water.
The prosecutor ordered an on-site forensic investigation, followed by a judicial autopsy with toxicological, histopathological, and genetic analyses. The evaluation of possible suicides is not, however, always straightforward, and it may be complicated by atypical death scene and autopsy findings. In forensic pathology, the differential diagnosis between homicides and suicides may be very challenging. In these circumstances, the forensic pathologist should therefore analyze all possible evidence, based on information collected during the on-site judicial investigation and offered by the police. Finally, this information should be analyzed together with autopsy data and laboratory findings. A 22-year-old male student did not show up for an appointment in the early afternoon, and as a consequence his parents were contacted. They then called the janitor of the apartment building where the young man lived, as he did not answer the phone. The janitor, who had an extra set of keys to all the flats, entered the man's house, whose door was closed. The janitor heard the water flowing in the bathroom and found him naked and unconscious, lying in the bathtub. She promptly alerted the emergency services.

The on-site judicial investigation

Upon arrival of the emergency team, the body was lying naked in the bathtub. The head was completely covered by a soaked canvas bag, held around the neck by a white nylon rope, and reached by the water jet coming from the showerhead. The emergency team turned off the water, cut the white rope around the neck, and partially lifted the canvas bag to expose his face (Fig. ). The man's death was declared, and the emergency team did not modify the scene any further. The police and the team of forensic experts then arrived at the scene. The door and the windows did not show any signs of forced entry, and the inside of the flat was clean and ordered. In the kitchen there were only a cup of coffee and a glass of water lying on the table.
On closer examination, the victim presented a complex system of bindings. The two handles of the canvas bag were tied to the upper limbs, with a grip at both axillae. Each wrist was tied with a single nylon rope (Fig. ). In particular, the two nylon ropes were firmly rolled up in several loops all around his wrists and his hands. The left forearm was set on the back, the left hand close to the right hip. In this way, the hands could be placed near each other, with a padlock binding the two ropes (Fig. ). A pair of scissors and the padlock key were found beneath the body, not far away from the hands (Fig. ). The remaining nylon rope was found on the floor of a bedroom (Fig. ). Protective plastic bags were placed on his hands and his wrists to prevent contamination. At the scene, the rectal temperature was 32.5 °C (environmental 24.0 °C), post-mortem lividity was intense, and it partially disappeared with pressure on the back. Rigor mortis was documented at the temporomandibular joints, the neck, and the main joints of the upper and lower limbs. The post-mortem interval was estimated from the body temperature using Henssge's nomogram. The time of death was limited to between 5 and 12 h before the on-site investigation. The family members were interviewed by the police. They reported that the victim did not suffer from psychiatric diseases or socio-economic difficulties. Furthermore, they reported that he had never shown suicidal ideation and/or attempts. A suicide letter was not left by the victim. The prosecutor ordered a judicial autopsy at the Milan Institute of Legal Medicine 36 h after the on-site judicial investigation.

Autopsy examination

Prior to the autopsy examination, the face, neck, wrists, hands, and external genitalia were swabbed to avoid any contamination.
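The 5–12 h window above comes from Henssge's nomogram, which solves a double-exponential body-cooling model for the post-mortem interval. The sketch below is a simplified numerical illustration, not the authors' actual calculation: it takes the measured rectal (32.5 °C) and ambient (24.0 °C) temperatures and the 70 kg body weight from this case, assumes a starting core temperature of 37.2 °C and a corrective factor of 1.0 (an assumption; a naked, wet body under running water would warrant a different factor), and uses the constant set Henssge gives for ambient temperatures above 23.2 °C.

```python
import math

def henssge_q(t_hours, body_mass_kg, corrective_factor=1.0):
    """Standardized temperature quotient Q at t_hours after death.

    Double-exponential cooling model with the constants for ambient
    temperatures above 23.2 degrees C.
    """
    b = -1.2815 * (body_mass_kg * corrective_factor) ** -0.625 + 0.0284
    return 1.11 * math.exp(b * t_hours) - 0.11 * math.exp(10 * b * t_hours)

def post_mortem_interval(t_rectal, t_ambient, body_mass_kg, corrective_factor=1.0):
    """Solve Q(t) = (Tr - Ta) / (37.2 - Ta) for t by bisection."""
    q_target = (t_rectal - t_ambient) / (37.2 - t_ambient)
    lo, hi = 0.0, 48.0  # Q(t) decreases monotonically for t > 0
    for _ in range(60):
        mid = (lo + hi) / 2
        if henssge_q(mid, body_mass_kg, corrective_factor) > q_target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Case data: rectal 32.5 degrees C, ambient 24.0 degrees C, 70 kg.
t = post_mortem_interval(32.5, 24.0, 70)
print(f"estimated post-mortem interval: {t:.1f} h")
```

With these assumptions the solver returns roughly 9 h, which sits comfortably inside the reported 5–12 h window once the method's tolerance and the uncertain corrective factor are taken into account.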
Also, the free margins of the nails, the nylon ropes, the padlock, and the canvas bag were collected for subsequent forensic genetic analysis, which was requested by the prosecutor. External examination indicated that the body was in a good state of preservation (weight 70 kg, length 180 cm), with rigor mortis present at the neck and the upper and lower limbs (the corpse had been refrigerated). Intense post-mortem lividity was present on the back and fixed; furthermore, there were no conjunctival petechial hemorrhages. A current mark was not documented on the body. At autopsy, the body showed neither blunt force injuries nor defensive wounds. In particular, the upper limbs did not show any injuries. The hyoid bone and both superior laryngeal cornua were undamaged. A pinkish frothy fluid was observed in the trachea and the main bronchi, but no foam was present in his mouth or his nostrils. The lungs were markedly overinflated (right = 1180 g; left = 1045 g), filling the thoracic cavity. The surface was pale and crepitant, with subpleural petechiae. The pulmonary parenchyma was waterlogged, with some areas of intrapulmonary bleeding. Furthermore, abundant red-tinged frothy fluid exuded from the bronchi on the cut section. Upon autopsy, bilateral hemorrhages within the petrous temporal bones were observed.
All the analyses were authorized by the prosecutor. Laboratory analyses Regarding the forensic genetic analysis, the PCR amplification only revealed the victim’s DNA, which was compared with the sample of psoas muscle collected during the autopsy examination. Toxicological analyses were performed in accordance with the protocols adopted in the Milan Institute of Legal Medicine. Alcohol concentrations were analyzed by gas chromatography (GC) in specimens of femoral blood, gastric content, and the brain: all of them resulted to be negative. Specimens of urine and cardiac blood, tested by ELISA immunoassay, were analyzed for illicit psychotropic drugs, which were negative. In addition, no medicinal drugs and non-volatile toxic substances were found in urine, cardiac blood, or bile, which were analyzed by GC and liquid chromatography (LC). Finally, no drugs were detected in hair sample and nasal swabs. Samples of the brain, heart, lungs, stomach, liver, spleen, and kidneys underwent standard post-fixative histopathologic examination. Slides were stained with hematoxylin and eosin (HE) and Masson’s trichrome staining (MT). Histologic slides of the brain, stomach, and kidneys showed post-mortem autolytic changes. Slides of the heart revealed wavy myocardial fibers, with a moderate fibrosis of the interstitium space. The spleen showed hyperemia, while the liver showed microvesicular steatosis. The pulmonary parenchyma showed a massive edema, with some areas of acute emphysema and hemorrhagic foci (Fig. ). This latter morphological pattern can be defined as emphysema aquosum , since the edema fluid in the bronchi blocks the passive collapse that normally occurs at death, holding the lungs in the inspiratory position. The other organs did not show any abnormalities. Finally, the cause of death was identified as an asphyxiation by drowning in combination with direct suffocation caused by the soaked canvas bag, in the context of waterboarding practice. 
Toxic substances and natural diseases were not documented. Upon arrival of the emergency team, the body was lying naked in the bathtub. The head was completely covered by a soaked canvas bag, held around the neck by a white nylon rope, and reached by the water jet coming from the showerhead. The emergency team turned off the water, cut the white rope around the neck, and partially lifted the canvas bag to expose his face (Fig. ). The man’s death was declared, and the emergency team did not modify the scene any further. The police and the forensic experts team then arrived at the scene. The door and the windows did not show any signs of forced entry, and the inside of the flat was clean and ordered. In the kitchen there were only a cup of coffee and a glass of water, laying on the table. On a closer examination, the victim presented a complex system of bindings. The two handles of the canvas bag were tied up to the upper limbs, with a grip at both the axillae. Each wrist was tied with a single nylon rope (Fig. ). In particular, the two nylon ropes were firmly rolled up in several loops all around his wrists and his hands. The left forearm was set on the back, the left hand close to the right hip. In this way, the hands could be placed near to each other, with a padlock that was binding the two ropes (Fig. ). A pair of scissors and the padlock key were found beneath the body, not far away from the hands (Fig. ). The rest of the nylon ropes was found on the floor of a bedroom (Fig. ). Protective plastic bags were placed on his hands and his wrists to prevent contamination. At the scene, the rectal temperature was 32.5 °C (environmental 24.0 °C), post-mortem lividity was intense, and it partially disappeared with pressure on the back. Rigor mortis was documented at the temporomandibular joints, the neck, and the main joints of the upper and lower limbs. The post-mortem interval was calculated based on the temperature relating to the nomogram by Henssge. 
The time of death was estimated at between 5 and 12 h before the on-site investigation. The family members were interviewed by the police. They reported that the victim did not suffer from psychiatric diseases or socio-economic difficulties. Furthermore, they reported that he had never expressed suicidal ideation or made suicide attempts. The victim left no suicide note. The prosecutor ordered a judicial autopsy at the Milan Institute of Legal Medicine 36 h after the on-site judicial investigation. Prior to the autopsy examination, the face, neck, wrists, hands, and external genitalia were swabbed to avoid any contamination. In addition, the free margin of the nails, the nylon ropes, the padlock, and the canvas bag were collected for subsequent forensic genetic analysis, which was requested by the prosecutor. External examination indicated that the body was in a good state of preservation (weight 70 kg, length 180 cm), with rigor mortis present at the neck and the upper and lower limbs (the corpse had been refrigerated). Intense, fixed post-mortem lividity was present on the back; there were no conjunctival petechial hemorrhages. No current mark was documented on the body. At autopsy, the body showed neither blunt force injuries nor defensive wounds. In particular, the upper limbs did not show any injuries. The hyoid bone and both superior laryngeal cornua were undamaged. A pinkish frothy fluid was observed in the trachea and the main bronchi, but no foam was present in the mouth or nostrils. The lungs were markedly overinflated (right = 1180 g; left = 1045 g), filling the thoracic cavity. The surface was pale and crepitant, with subpleural petechiae. The pulmonary parenchyma was waterlogged, with some areas of intrapulmonary bleeding. Furthermore, abundant red-tinged frothy fluid exuded from the bronchi on the cut section. Upon autopsy, bilateral hemorrhages within the petrous temporal bones were observed.
About 50 mL of brownish fluid was found in the stomach, with no traces of food. The heart, the abdominal viscera, and the pelvis showed no gross lesions, and nothing else remarkable was observed at autopsy. Viscera specimens (brain, lungs, liver, kidneys), biological fluids (femoral and cardiac blood, bile, urine, and gastric content), hair, and nasal swabs were sampled for subsequent toxicological analyses. Samples of the brain, heart, lungs, stomach, liver, spleen, and kidneys were also collected for histopathologic examination. A specimen of psoas muscle was also sampled for forensic genetic analysis.

Laboratory analyses
All the analyses were authorized by the prosecutor. Regarding the forensic genetic analysis, PCR amplification revealed only the victim's DNA, which was compared with the psoas muscle sample collected during the autopsy examination. Toxicological analyses were performed in accordance with the protocols adopted at the Milan Institute of Legal Medicine. Alcohol concentrations were analyzed by gas chromatography (GC) in specimens of femoral blood, gastric content, and the brain: all were negative. Specimens of urine and cardiac blood, tested by ELISA immunoassay, were negative for illicit psychotropic drugs. In addition, no medicinal drugs or non-volatile toxic substances were found in urine, cardiac blood, or bile, which were analyzed by GC and liquid chromatography (LC). Finally, no drugs were detected in the hair sample or the nasal swabs. Samples of the brain, heart, lungs, stomach, liver, spleen, and kidneys underwent standard post-fixation histopathologic examination. Slides were stained with hematoxylin and eosin (HE) and Masson's trichrome (MT). Histologic slides of the brain, stomach, and kidneys showed post-mortem autolytic changes. Slides of the heart revealed wavy myocardial fibers, with moderate fibrosis of the interstitial space.
The spleen showed hyperemia, while the liver showed microvesicular steatosis. The pulmonary parenchyma showed massive edema, with some areas of acute emphysema and hemorrhagic foci (Fig. ). This latter morphological pattern can be defined as emphysema aquosum, since the edema fluid in the bronchi blocks the passive collapse that normally occurs at death, holding the lungs in the inspiratory position. The other organs did not show any abnormalities. Finally, the cause of death was identified as asphyxiation by drowning in combination with direct suffocation caused by the soaked canvas bag, in the context of waterboarding practice. Toxic substances and natural diseases were not documented.

To the best of our knowledge, waterboarding has never been used to commit suicide or homicide, but only to torture prisoners. Waterboarding has, however, been practiced for centuries. It was used by the Spanish Inquisition in the sixteenth century, during the Thirty Years' War (1618–1648), by the Japanese Army during World War II, and by Pol Pot's Khmer Rouge in Cambodia (1975–1978). Since the early 2000s, the Central Intelligence Agency (CIA) was authorized to use waterboarding against suspected Al-Qaeda terrorists held at the Guantanamo Bay detention camp, Cuba. As a method of torture, waterboarding became illegal under the law of war with the adoption of the third Geneva Convention of 1929, which required that prisoners of war be treated humanely, and the third and fourth Geneva Conventions of 1949, which explicitly prohibited the torture and cruel treatment of prisoners of war and civilians. In the case presented, the hypothesis of a waterboarding fatality in the bathtub was based on several findings. In particular, the victim was found completely naked in the bathtub, with the hands firmly tied with two nylon ropes and bound with a padlock.
The head was covered by a soaked canvas bag, held around the neck by a nylon rope and reached by the water jet from the showerhead, which was specifically angled toward the head. The external examination did not show any injuries. In particular, signs of blunt force trauma were not documented. Furthermore, the defensive cut wounds that typically involve the upper limbs were not observed. On closer examination, the neck, thorax, and abdomen were free of injuries, as were the head and back, which are frequently involved in an assault. At autopsy, the neck structures were also completely undamaged, without any hemorrhagic infiltration of the muscles. Signs of struggle or attempted immobilization were therefore ruled out. Indeed, Schmidt and Madea reported that homicides committed in the bathtub, or the mere deposition of a homicide victim in the bath, are very rare events. In a retrospective study, they documented 11 homicides among 215 bathtub fatalities. In particular, 5 victims were strangled, 4 were stabbed, and 2 showed pathological findings of asphyxiation by drowning in combination with severe miscellaneous blunt force violence, such as contusions of the skull and hemorrhages in the soft tissues of the back and arms. Ten victims were female, while the only male victim showed abrasions, contusions, and lacerations of the skull, with 98 stab wounds. The ages of the deceased ranged from 13 to 63 years, and the 20–40-year age group accounted for most of the fatalities. In the case presented, toxicological analyses were all negative for drugs and illicit substances. Toxicological investigations help forensic pathologists establish whether the victim had taken medications, alcohol, or illicit drugs, which may alter the psycho-physical abilities of a healthy man and thereby facilitate direct physical violence and mechanical asphyxiation (e.g., strangulation or smothering).
Drugs used in drug-facilitated sexual assault (DFSA) are indeed mostly central nervous system (CNS) depressants. Dozens of drugs (including ethanol) can be used in DFSA. γ-Hydroxybutyric acid (GHB) and flunitrazepam are the most common “date rape drugs”; others include antidepressants, muscle relaxants, antihistamines, opioids, and hallucinogens such as MDMA and ketamine. Interestingly, several DFSA drugs, such as GHB, are also endogenous substances produced by the human body. In this regard, the analysis of multiple matrices is advisable to obtain complementary information, differentiating endogenous production from exogenous administration. Forensic genetic analysis revealed only the victim's DNA. In addition, the police examined the security camera footage recorded in the apartment building where the victim lived. No suspicious activity was recorded within the estimated time-of-death window. These findings were therefore highly suggestive of a suicidal waterboarding fatality. The police also tried to reproduce the victim's complex binding system. According to the police, the victim may have tied his wrists and hands with the nylon ropes. Then, he may have put the canvas bag over his head and fastened it with the nylon rope. Finally, he might have bound with a padlock the two ropes previously fastened around each hand, and opened the mixer tap of the shower with a knee or a foot. According to the medicolegal literature, self-tying of the hands with very complex bindings is possible in suicidal deaths and may also serve to prevent a change of heart during the procedure, especially if the chosen manner of death turns out to be excessively painful or agonizing. Suicidal waterboarding is therefore definable as a primary, planned complex suicide, since two different, independent lethal methods are applied simultaneously.
On one hand, the soaked canvas bag provokes a direct physical obstruction of the mouth and nostrils, which is augmented by respiratory activity. On the other hand, water gradually enters the lungs and causes asphyxiation by drowning. In our opinion, the victim died quickly, since his body was not in the Trendelenburg position (used for torture purposes), which prevents water from rapidly flooding the airways. Furthermore, waterboarding may have been chosen by the young victim as a self-killing method after watching movies or TV series depicting this torture technique. The authors present the first case of suicidal waterboarding, although a clear and definitive differential diagnosis with homicide is not possible beyond any doubt. In forensic practice, this aspect remains challenging for forensic pathologists. However, a multidisciplinary approach based on a thorough on-site investigation, autopsy examination, and laboratory analyses is highly advisable in such complex cases.
p53 Immunohistochemistry Defines a Subset of Human Papillomavirus–Independent Penile Squamous Cell Carcinomas With Adverse Prognosis | eade0b58-5919-4334-aeac-000dd4f6c3e2 | 11472902 | Anatomy[mh] | Patients We retrospectively identified all patients surgically treated for PSCC in 2 tertiary general hospitals (Hospital Clinic de Barcelona, Vall d’Hebron Barcelona Hospital) and a monographic urological center (Fundació Puigvert) in Barcelona, Spain, from January 2000 to December 2020. All patients fulfilled the following inclusion criteria: (1) had a primary diagnosis of PSCC, (2) had a follow-up of at least 22 months or until death, and (3) had sufficient available tumor tissue for ancillary IHC studies. All patients were treated following the guidelines of the European Association of Urology depending on the clinical staging, which was determined on the basis of physical examination plus imaging techniques (ultrasound scan, computed tomography scan, and/or positron emission tomography, etc) when required. All local excisions aimed at organ sparing and reconstructive techniques were used when necessary to minimize the functional impact. Inguinal lymph node evaluation was performed if required. Guided sentinel node biopsy was the first option. Endoscopic inguinal modified lymphadenectomy was performed when sentinel node biopsy was not available. In all patients with positive sentinel node, a radical inguinal lymphadenectomy was performed. The following clinical and pathologic variables were retrieved from the electronic files: age at diagnosis, tumor location, type and date/s of treatment/s, margin status, vascular invasion, perineural invasion, stage at diagnosis, date of first cancer recurrence, and patient status at follow-up. The study was approved by the Healthcare Ethics Committee of the Hospital Clinic of Barcelona, Hospital Vall d’Hebron, and Fundació Puigvert (HCB/2020/1207, PR(AG)578/2021, FP2021/05c, respectively). 
Informed written consent was obtained from all the patients included in the study.

p16 IHC
IHC for p16 was performed for all samples using the CINtec Histology Kit (clone E6H4; Roche). Tumors with strong and diffuse block-type staining were considered positive, whereas patchy or completely negative p16 staining was considered p16 negative. In each run a p16-positive squamous carcinoma of the vulva was used as positive control. All cases were independently evaluated by 2 pathologists with expertise in the interpretation of p16 staining (I.T. and N.R.).

In Situ Hybridization
RNA ISH was performed for all samples using the automated Leica Biosystems BOND-III and RNAscope ISH probe high-risk HPV. The assay qualitatively detects E6 mRNA in the 16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58, 59, 66, 68, 73, and 82 high-risk HPV types. In each run, a carcinoma of the uterine cervix with known HPV16 positivity was used as a control.

p53 IHC
p53 IHC was performed in all patients with a monoclonal antibody (clone DO-7; Roche) on an automated staining system (Ventana Benchmark ULTRA, Ventana Medical Systems). IHC staining was evaluated following the p53 pattern-based interpretation framework recently described for squamous cell carcinomas of the vulva and recently confirmed in PSCC; this method includes 2 major categories: “normal,” which correlates with wild-type TP53, and “abnormal staining,” which correlates with mutated TP53. The “normal” category included 2 patterns: (1) occasional positive nuclei in the basal and/or parabasal layer (scattered pattern) and (2) moderate to strong nuclear p53 IHC staining in the parabasal layers with absence of expression in the basal cells (mid-epithelial pattern).
The “abnormal” category included 4 p53 IHC patterns: (1) continuous, strong nuclear staining of the basal layer (basal overexpression pattern), (2) continuous and strong nuclear basal staining with suprabasal extension (diffuse overexpression pattern), (3) cytoplasmic staining with or without nuclear positivity (cytoplasmic pattern), and (4) complete absence of staining in the tumor (null pattern), with evidence of intrinsic positive control (positive staining in adjacent inflammatory and stromal cells). All cases were independently evaluated by 2 pathologists with expertise in the interpretation of p53 staining (I.T. and N.R.). All discrepancies were discussed in a consensus meeting, and a final evaluation was achieved. In each run, a normal tonsil showing scattered positive staining and a serous carcinoma of the ovary with diffuse p53 IHC overexpression were used as controls. Thirty-three patients were previously included in a recent study focused on the validation of this pattern-based p53 interpretation framework against TP53 mutational analysis, in which 95% concordance was observed.

The Criteria for PSCC Classification Into Three Groups
All the study cases were classified into 3 main categories based on HPV ISH and p16 IHC results and the pattern of p53 IHC. The categories included the following: (1) HPV-associated PSCC (positive for HPV ISH and p16 IHC, independent of the p53 IHC pattern), (2) HPV-independent/p53 normal PSCC (negative for HPV ISH and p16 IHC and a scattered or mid-epithelial p53 IHC pattern), and (3) HPV-independent/p53 abnormal PSCC (negative for HPV ISH and p16 IHC and diffuse overexpression, basal overexpression, cytoplasmic, or null patterns of p53 IHC).

Statistical Analyses
The statistical analyses were conducted using R Statistical Software (v4.3.2; R Core Team 2021).
The χ2 test and Fisher exact test were employed for categorical data, whereas the Wilcoxon rank-sum test was utilized for numerical data, enabling the comparison of clinical and histopathological data. The endpoints for prognosis were recurrence-free survival (RFS) and disease-specific survival (DSS), which were calculated from the date of treatment (primary surgery) to the date of first recurrence or progression or to death due to the disease, respectively. Cumulative incidences were depicted through plotted curves, and differences between the curves were assessed using a Gray test. Univariate and adjusted (multivariate) models were obtained using the Cox proportional hazards model. For the multivariate analysis, 2 models were built, one including the molecular type and the second including the p53 IHC status, due to the collinearity of these 2 variables. Two-sided tests were used, and a P value <0.05 indicated statistical significance.
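For illustration only, the three-way grouping defined above can be written as a simple decision rule. This is a sketch of the published criteria; the function name and string labels are ours, not the authors':

```python
def classify_pscc(hpv_ish_positive: bool, p16_block_positive: bool, p53_pattern: str) -> str:
    """Assign a PSCC molecular category from HPV ISH, p16 IHC, and the
    p53 IHC pattern (pattern names follow the framework described above)."""
    normal_p53 = {"scattered", "mid-epithelial"}
    abnormal_p53 = {"basal overexpression", "diffuse overexpression", "cytoplasmic", "null"}
    if p53_pattern not in normal_p53 | abnormal_p53:
        raise ValueError(f"unknown p53 pattern: {p53_pattern}")
    if hpv_ish_positive and p16_block_positive:
        # The p53 pattern does not change the category for HPV-associated tumors.
        return "HPV-associated"
    if not hpv_ish_positive and not p16_block_positive:
        return ("HPV-independent/p53 normal" if p53_pattern in normal_p53
                else "HPV-independent/p53 abnormal")
    # Discordant HPV ISH / p16 results did not occur in this series.
    return "unclassifiable (discordant HPV ISH / p16 IHC)"

print(classify_pscc(True, True, "diffuse overexpression"))  # HPV-associated
print(classify_pscc(False, False, "scattered"))             # HPV-independent/p53 normal
print(classify_pscc(False, False, "null"))                  # HPV-independent/p53 abnormal
```

The discordant branch is a defensive default for illustration; in this cohort every HPV ISH-positive tumor was also p16 positive, so the rule reduces to the three published categories.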
Clinical Pathologic Features of the Overall Series
One hundred twenty-two patients were included in the study. Of these, 43 were from the Hospital Clínic de Barcelona, 35 were from the Hospital Vall d’Hebron, and 44 were from the Fundació Puigvert. The mean age at diagnosis was 68.6 years (range: 40 to 96). Fifty-eight patients (47.5%) were stage I at diagnosis, 40 (32.8%) were stage II, 16 (13.1%) were stage III, and 8 (6.6%) were stage IV tumors. The median follow-up period was 56.9 months (range: 22 to 60 months).
Sixty-five patients (53.3%) underwent penectomy (partial or radical), 48 (39.3%) underwent glansectomy, and 9 (7.4%) underwent circumcision. Metastatic involvement of the lymph nodes was identified in 24 patients (19.7%).

HPV ISH and p16 and p53 IHC Results and Tumor Classification
HPV ISH was positive in 36/122 tumors (29.5%), and in all of them, p16 IHC was positive. These tumors were classified accordingly as HPV-associated tumors. Thirty-two of the 36 HPV-associated tumors had a normal pattern of p53 IHC expression (88.9%): 31 had a scattered pattern, and 1 had a mid-epithelial pattern of p53 IHC. Only 4 (11.1%) HPV-associated tumors presented an abnormal p53: 2 had diffuse overexpression, 1 had basal overexpression, and 1 had a null pattern. Eighty-six of the 122 tumors (70.5%) were negative for HPV ISH and p16 IHC and were classified as HPV-independent. Among these 86 tumors, 35 (28.7% of the overall series) showed normal p53 IHC and were classified as HPV-independent/p53 normal; all exhibited a scattered pattern. Fifty-one tumors in this HPV-independent category (41.8% of the overall series) had abnormal p53 IHC and were classified as HPV-independent/p53 abnormal. The most common abnormal p53 IHC pattern in this cohort was diffuse overexpression (27/51, 53.0%), followed by the null pattern (14 patients, 27.4%), basal overexpression (8 patients, 15.7%), and cytoplasmic expression (2/51, 3.9%).

Characteristics of the Three Molecular Types of PSCC
Table summarizes the clinical and pathologic features of the patients classified into the 3 molecular categories defined in this study. Patients with HPV-independent/p53 normal tumors were older (mean age of 72 years) than those in the other 2 categories (67 years for both HPV-associated and HPV-independent/p53 abnormal tumors [P = 0.040]).
Patients with HPV-independent/p53 abnormal tumors had a greater risk of lymph node metastases than patients with HPV-independent/p53 normal tumors (P = 0.012). The histologic variants of the HPV-associated tumors significantly differed from the variants identified in the HPV-independent molecular types. Moreover, there were no differences in terms of histologic variants between HPV-independent/p53 normal and HPV-independent/p53 abnormal tumors. No differences were observed in terms of anatomic location, vascular or perineural invasion, margin status, or stage at diagnosis. Figure shows a representative example of each of the 3 tumor categories, including hematoxylin and eosin staining features, as well as HPV ISH and p16 and p53 IHC staining.

Survival Analysis
Thirty patients (24.6%) experienced disease recurrence during follow-up, but no differences were observed among the 3 molecular types in terms of recurrence rate (P = 0.067). Seventeen patients (13.9%) died due to PSCC, and 13 (10.6%) died due to other causes. Disease-related death was observed in 3/36 (8.3%) patients with HPV-associated PSCC and 0/35 (0.0%) patients with HPV-independent/p53 normal PSCC, with the highest number of events occurring in patients with HPV-independent/p53 abnormal PSCC (14/51 [27.5%]; P < 0.001). The patterns of p53 IHC in the HPV-independent tumors of the patients who died due to the tumor were diffuse overexpression (7/27; 25.9%), null pattern (5/14; 35.7%), basal overexpression (1/8; 12.5%), and cytoplasmic expression (1/2; 50%). Figure shows the cumulative incidence curves for RFS and DSS for the 3 molecular categories. No differences in RFS were observed among the 3 molecular types (P = 0.083); however, significant differences in DSS were detected (P = 0.001), with patients with HPV-independent/p53 abnormal PSCC having the worst survival outcomes.
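Because patients could also die of unrelated causes (13 such deaths), DSS is naturally a competing-risks problem: the cumulative incidence curves estimate the probability of dying of PSCC while treating other deaths as competing events rather than censoring them. The following is a minimal, illustrative estimator of that quantity on toy data (a sketch in the Aalen-Johansen style, not the authors' R code):

```python
def cumulative_incidence(times, events, cause=1):
    """Cumulative incidence function for one cause under competing risks.
    events: 0 = censored, 1 = event of interest, 2 = competing event.
    Returns (time, CIF) steps at the times where the cause of interest occurs."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0   # all-cause Kaplan-Meier survival just before the current time
    cif = 0.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d_cause = d_any = removed = 0
        while i < len(data) and data[i][0] == t:   # handle ties at time t
            ev = data[i][1]
            removed += 1
            if ev != 0:
                d_any += 1
                if ev == cause:
                    d_cause += 1
            i += 1
        cif += surv * d_cause / n_at_risk          # cause-specific hazard x overall survival
        surv *= 1.0 - d_any / n_at_risk            # update all-cause Kaplan-Meier estimate
        n_at_risk -= removed
        if d_cause:
            steps.append((t, round(cif, 3)))
    return steps

# Toy follow-up data (months): 1 = death from PSCC, 2 = death from other causes, 0 = censored.
times  = [5, 10, 12, 20, 24, 30, 36, 48]
events = [1,  2,  0,  1,  0,  2,  1,  0]
print(cumulative_incidence(times, events))  # [(5, 0.125), (20, 0.275), (36, 0.475)]
```

Gray's test, used in the study, compares such cumulative incidence functions across groups, analogous to the log-rank test for Kaplan-Meier curves.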
In terms of tumor staging, no differences in RFS were observed between patients with early-stage tumors and patients with advanced-stage tumors (P = 0.073); however, significant differences in DSS were identified (P < 0.001). Remarkably, 10/14 (71.4%) patients with HPV-independent/p53 abnormal PSCC diagnosed at stage III or IV died from the disease, compared with only 2/8 (25.0%) of the patients with HPV-associated tumors at stage III or IV and 0/2 (0.0%) of the HPV-independent/p53 normal patients at stage III or IV (P = 0.048). Table shows the Cox regression analysis for RFS. Vascular invasion, perineural invasion, lymph node metastasis, and advanced disease stage were associated with impaired RFS in the univariate analysis. According to the multivariate analysis, only vascular invasion reached statistical significance. The results of the Cox regression analysis for DSS are shown in Table. The molecular type (HPV-independent/p53 abnormal), abnormal p53 IHC pattern, vascular invasion, perineural invasion, lymph node metastases, and advanced stage were associated with impaired DSS in the univariate analysis. The 2 multivariate models showed that the HPV-independent/p53 abnormal molecular type (P = 0.001) or abnormal p53 IHC expression (P = 0.001), in addition to vascular and perineural invasion, lymph node metastases, and advanced stage, were significantly associated with impaired DSS.

The most remarkable finding of our study, which included a large series of patients with PSCC treated at 3 different institutions in Barcelona, Spain, was the difference in prognosis observed among the 3 molecular types of PSCC defined according to their association with HPV and p53 IHC: HPV-associated, HPV-independent with normal p53, and HPV-independent with abnormal p53 PSCC. Remarkably, the classification of the tumors based on HPV status and p53 IHC patterns had a stronger impact on DSS in the multivariate analysis than did the staging system, suggesting that not only HPV status but also p53 IHC should be routinely evaluated in all patients with PSCC.
The good prognosis of patients with HPV-associated tumors (over 90% DSS at 5 years) strongly supports the current 2022 WHO classification of PSCC, which separates tumors based on HPV status. Studies on the prognostic impact of HPV status in patients with PSCC have shown controversial results, with some reporting no differences in DSS and others showing longer DSS in HPV-associated PSCC, similar to what occurs in HPV-associated carcinomas in other anatomic sites, such as the head and neck. The good DSS of patients with HPV-associated tumors in our study is remarkable, considering that these patients were frequently diagnosed in advanced stages and 25% of them had metastatic involvement of the lymph nodes. These results suggest that, as shown in HPV-associated tumors from other anatomic areas, HPV-associated PSCC is highly sensitive to radiation and chemotherapy. In accordance with the findings of other European series, HPV-associated tumors represented a small percentage (29.5%) of all PSCC. As previously reported, p16 IHC results showed excellent correlation with HPV ISH, reinforcing the validity of the recent WHO 2022 recommendation of using p16 IHC as a surrogate for the presence of high-risk HPV. Our study revealed that the second type of PSCC defined by the WHO, HPV-independent tumors, includes at least 2 categories with different clinical and pathologic features and, most importantly, a different prognosis. The first category, HPV-independent PSCC with normal p53 expression, is associated with several specific clinicopathologic features: these tumors arise in older men, have a very low rate of lymph node metastases, and are rarely diagnosed at stage III or IV. Remarkably, although these patients had similar rates of recurrence compared with those in the other two groups, they had excellent DSS, with no tumor-related deaths.
This category of HPV-independent tumors with normal p53 IHC has previously been described in the vulva, where they show similar behavior to that observed in our series of PSCC, with frequent recurrences but extremely good DSS. The most frequent category of PSCC in our study (60% of all HPV-independent tumors and 40% of all tumors) was HPV-independent/p53 abnormal PSCC. This percentage of abnormal p53 IHC results is similar to the percentage reported by other studies in HPV-independent PSCC. In contrast with the favorable DSS of patients with HPV-associated tumors and HPV-independent/p53 normal tumors, the prognosis of patients with HPV-independent/p53 abnormal PSCC is poor, with a 27% 5-year mortality. Importantly, the impaired DSS of this subgroup was confirmed through multivariate analysis. Our study confirmed that the pattern-based p53 IHC evaluation significantly improves the conventional evaluation of p53 IHC. In addition, this framework recognizes additional abnormal p53 patterns (null, cytoplasmic, and basal overexpression) usually misinterpreted as p53 normal (wild-type). As shown in this study, these patterns correlated with an adverse prognosis, as 7/14 deaths in the HPV-independent/p53 abnormal group were related to tumors showing p53 patterns not recognized through classic p53 IHC. These differences in p53 IHC results may explain the differences observed in previous studies regarding the prognostic impact of p53 IHC. Although the vast majority of HPV-associated PSCC showed a normal pattern of p53 expression, a small percentage of patients in our series (11%) exhibited an abnormal p53 IHC pattern. This finding has been previously described. Moreover, TP53 mutations have been detected in small percentages of HPV-associated PSCC and in HPV-associated tumors of other sites, indicating that abnormal TP53 (or abnormal p53 IHC) is not an exclusive finding of HPV-independent carcinomas.
Interestingly, none of the patients with HPV-associated PSCC with an abnormal p53 IHC expression pattern died due to the disease, suggesting that TP53 mutation does not impair prognosis in this molecular category, although further studies including a greater number of HPV-associated tumors with abnormal p53 IHC staining are needed to reach strong conclusions. Finally, it should be emphasized that although the correlation between p16 IHC overexpression and HPV detection was 100% in this study, discrepant results have been reported in around 10% of cases in studies focused on head and neck and vulvar tumors, and this phenomenon should also be expected to occur in PSCC. Interestingly, the presence of vascular invasion was the only factor associated with disease recurrence according to the multivariate analysis. This association is not surprising considering the previously reported association between vascular invasion and lymph node metastases in PSCC. Our study has several limitations. First, due to the large time frame of inclusion, a significant number of patients, mainly from the initial period, did not undergo inguinal staging using sentinel lymph node analysis; thus, some microscopic inguinal lymph node metastases could have been missed. Secondly, the results could not be corrected for the different treatments due to the small number of patients requiring adjuvant therapies. Thirdly, the small number of disease-related deaths in the series might have affected the strength of the statistical estimations. Finally, the pattern-based evaluation of p53 IHC has not been extensively validated in alternative PSCC cohorts. Thus, the results of this study should be confirmed in larger, more homogeneous cohorts. We showed that patients with HPV-independent/p53 abnormal PSCC have worse clinical outcomes than patients with HPV-associated and HPV-independent/p53 normal PSCC.
p53 IHC defines 2 prognostic categories in HPV-independent PSCC: HPV-independent/p53 normal PSCC are low-risk tumors, whereas HPV-independent/p53 abnormal tumors can be considered aggressive neoplasms. Our study suggests that PSCC be stratified into 3 molecular types with distinct clinicopathological features and behaviors based on p16 IHC (as a surrogate of HPV status) and p53 IHC (as a surrogate of TP53 mutational status). If these results are confirmed in prospective studies, they could help to refine the staging work-up, treatment schemes, and follow-up strategies for patients with PSCC.
Comparative analysis between Reverdin-Isham Osteotomy (RIO) and minimally invasive intramedullary nail device (MIIND) in association with AKIN osteotomy for Hallux valgus correction

Hallux valgus is a common forefoot deformity that causes pain, discomfort and difficulty walking, leading to decreased quality of life. The estimated prevalence of this condition is 19% in the general population, but it increases to 22.7% in individuals aged 60 or above, and it is more frequent in women (23.74% vs. 11.43% in males) . The aetiology is multifactorial, involving intrinsic and extrinsic causes, even if not totally understood . It is described as the progressive abduction and pronation of the first phalanx, adduction, pronation and elevation of the first metatarsal bone (MB), the lateral capsular retraction of the first metatarsophalangeal joint and dislocation of the sesamoids . Surgical correction is the standard treatment of symptomatic HV and seems to be more effective than nonoperative methods . More than 400 different surgical techniques have been described for HV correction, from open traditional procedures to percutaneous ones, consisting of various types of osteotomies at different levels of the first MB . Minimally invasive surgery (MIS) and percutaneous techniques are becoming popular because of the good clinical and radiographic outcomes, smaller scars, lower postoperative pain, immediate weight bearing and shorter recovery . Among percutaneous techniques, the Reverdin-Isham osteotomy (RIO), in combination with the Akin osteotomy, is performed with lateral soft tissue release and without internal fixation to correct mild-to-moderate deformity . The Minimally Invasive Intramedullary Nail Device (MIIND) consists of a curvilinear cylindrical titanium body and a blade producing a progressive lateral displacement of the first metatarsal head (MTH).
It is preferably indicated to correct moderate-to-severe HV, allowing multiplanar correction of the deformity and the anatomic reduction of the sesamoids without performing a lateral release . Only a few trials have compared percutaneous versus minimally invasive procedures, reporting no differences . However, these studies had small sample sizes and relatively short follow-ups. This study aimed to compare the clinical and radiographic outcomes at long-term follow-up of patients surgically treated for painful HV using the RIO and the MIIND in association with Akin osteotomy. Study design A retrospective, comparative, observational, single-centre cohort study of consecutive patients diagnosed with mild-to-severe HV was performed. After providing written informed consent, patients were enrolled from January 2014 to December 2018. The Local Ethics Committee approved the study (4064/AO/17) that was carried out in accordance with the ethical standards from the Declaration of Helsinki, revised in 2024. The inclusion criteria were as follows: patients between 18 and 80 years old with a diagnosis of HV with constant pain in the area of the first MTH not extending to other metatarsals, having particular discomfort while wearing shoes, and undergoing unilateral RIO or MIIND procedure. The exclusion criteria were as follows: previous foot surgery or trauma, vascular insufficiency, diabetes mellitus, foot neuropathy, rheumatologic diseases, hallux rigidus, generalised joint laxity or hypermobility of the first ray, additional procedures on the lateral rays during the same operation and a follow-up less than 60 months . Patients were categorised based on the Mann and Coughlin classification for HV correction and underwent RIO or MIIND based on the severity of HV: (1) RIO + Akin for mild-moderate HV and (2) MIIND + Akin for moderate-severe HV. The same surgeon performed both techniques, leveraging his extensive experience with them, spanning nearly two decades. 
The choice of technique was guided by the outcomes and criteria established in previous case series. RIO was selected for mild-to-moderate HV correction, while MIIND was preferred for cases of moderate-to-severe deformity . Surgical techniques Each operative technique included prophylactic antibiotic therapy before surgery and thromboembolic prophylaxis the same evening and for 30 days. Anaesthesia consisted of conscious sedation in association with a regional ankle block of superficial and deep nerves . Both operative procedures were completed with a percutaneous Akin osteotomy performed medially at the base of the proximal phalanx with medial base and lateral cortex preservation. RIO surgical technique The RIO was performed through two different skin incisions and under fluoroscopy control, as described by De Prado and by Biz et al. . Through the first incision on the medial side of the first MTH, the exostosectomy and the distal osteotomy were performed without cutting the lateral cortex. Then, a wedge burr was used to create a wedge with a medially oriented base. At the point of closing the wedge, osteoclasis of the preserved lateral cortex was achieved, modifying the orientation of the articular surface, normalising the distal metatarsal articular angle (DMAA) value, and adding intrinsic stability to the osteotomy by producing contact of the trabecular bone. A scalpel was introduced through a second skin incision over the first metatarsal space to perform lateral soft tissue release and lateral capsulotomy. Finally, a bandage was applied to maintain the correction. MIIND surgical technique The MIIND technique was performed through a 3-cm dorsal-medial longitudinal incision centred on the exostosis of the first MB . A linear osteotomy at the proximal level of the metatarsal neck followed the bunionectomy.
Correction of the DMAA and subluxation of the sesamoids were then achieved by inserting the trial nail device into the medullary cavity, with progressive lateral displacement of the MTH and its simultaneous derotation. Then, the Endolog, a curved intramedullary titanium nail (available in 3 different lengths and curvatures), was implanted and fixed to the MTH with a screw to provide angular stability . The medial angle of the metatarsal neck was regulated with a micro-saw to prevent conflict of the bone with the soft tissues and skin. Finally, a compression dressing and tape were applied with the hallux slightly hypercorrected. Postoperative protocol Postoperative treatment was standardised for both groups as previously described . Soft dressings were applied and, after suture removal at two weeks, a postoperative bandage was reduced to allow full movement of the first metatarsophalangeal joint. Starting the evening after surgery, patients could walk as much as they tolerated using a rigid, flat-soled orthopaedic shoe for 30 days. Patients were instructed to wear an interdigital silicone orthosis spacer between the first and second toe for one month to help maintain the correct position of the first ray until the osteotomy fully consolidated. Patient assessment Baseline characteristics of the patients were collected including age, gender, body mass index (BMI), smoking habits, side involved, use of narrow-tip shoes and/or high heels and family history of HV. Radiological and clinical evaluation Radiographic and clinical follow-up assessments were performed at baseline, at 3 and 12 months after surgery, and at the last follow-up of 60 months by two orthopaedic surgeons. Intraclass Correlation Coefficients (ICCs) for continuous variables were used to quantify the agreement levels. Intra-reader and inter-reader reliability were found to be good (> 0.80) for all measurements. Radiological outcomes were evaluated using the MedStation program (Version 4.9).
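The inter- and intra-reader agreement quoted above (ICC > 0.80) corresponds to an ICC computed on repeated angle measurements. A minimal sketch of a two-way random-effects, absolute-agreement, single-measurement ICC(2,1); the exact ICC form used in the study is not specified, and the reader data here are hypothetical:

```python
import numpy as np

def icc_2_1(x):
    """Two-way random-effects, absolute-agreement, single-measurement
    ICC(2,1) for an (n subjects x k raters) matrix, via ANOVA mean
    squares: subjects (MSR), raters (MSC), and residual (MSE)."""
    x = np.asarray(x, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row_means = x.mean(axis=1)          # per-subject means
    col_means = x.mean(axis=0)          # per-rater means
    msr = k * np.sum((row_means - grand) ** 2) / (n - 1)
    msc = n * np.sum((col_means - grand) ** 2) / (k - 1)
    resid = x - row_means[:, None] - col_means[None, :] + grand
    mse = np.sum(resid ** 2) / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical HVA readings (degrees) by two readers on five radiographs
readings = np.array([
    [23.0, 24.0],
    [31.5, 31.0],
    [18.0, 19.0],
    [40.0, 41.5],
    [27.0, 27.5],
])
icc = icc_2_1(readings)
```

With between-subject variation much larger than the between-reader differences, as in this toy matrix, the ICC approaches 1; identical columns yield exactly 1.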
Hallux valgus angle (HVA), intermetatarsal angle (IMA), distal metatarsal articular angle (DMAA), and the tibial sesamoid position (TSP) were measured and categorised with regard to deformity severity (Additional File 1) . The 100-point hallux metatarsophalangeal-interphalangeal scale by the American Orthopaedic Foot and Ankle Society (AOFAS) was used to assess clinical outcomes . Both preoperatively and at the last follow-up, pain was evaluated using the Numeric Rating Scale (NRS-11) , ranging from 0 (no pain) to 10 points (worst pain); only patients reporting NRS-11 ≥ 5 were operated on. Patient satisfaction was evaluated using the Visual Analogue Scale (VAS) for satisfaction, ranging from 0 (not satisfied) to 10 points (excellent result) . Clinical complications were collected. Statistical analysis An independent statistician from another institution conducted the statistical analysis. Continuous data were presented as means and standard deviations, while categorical data were expressed as percentages where appropriate.
The propensity score was estimated using a logistic regression model in which the surgical treatment (RIO vs. MIIND) status was regressed on observed characteristics (covariates and factors). The impact of each variable was visually inspected by plotting their standardized coefficients. The quality of the model was confirmed by several indicators, including − 2 Log (Likelihood), R² according to McFadden, Cox and Snell, and Nagelkerke, Akaike Information Criterion (AIC), Schwarz Bayesian Information Criterion (SBC), and the area-under-the-curve (AUC) from the Receiver Operating Characteristic (ROC) analysis. The quality of the model was found to be excellent (Fig. ). The matching percentage was 31%, resulting in thirty subjects per group. Diagnostics, including balancing boxplots, were visually inspected to verify the effects of the matching operation on several parameters of the distribution of the propensity score within each group . Distributions were found to be comparable after the matching operation, differently from before (Fig. ). Differences between the two matched groups were computed using Student’s t-test for paired samples (or its non-parametric version) . Given the relatively low matching percentage, a sensitivity analysis was conducted using an ordinary least-squares regression mixed model for repeated measures data applied to the entire dataset, which yielded comparable results (Additional File 2).
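The matching step described above (one-to-one matches on a logistic-regression propensity score, with a caliper of 0.10 * sigma) can be sketched as follows. The study used XLSTAT's optimal algorithm based on Euclidean distance; this illustrative Python version instead uses simple greedy nearest-neighbour matching on precomputed, hypothetical logit propensity scores:

```python
import numpy as np

def caliper_match(ps_treated, ps_control, caliper_mult=0.10):
    """Greedy 1:1 nearest-neighbour matching on (logit) propensity
    scores, with a caliper of caliper_mult * sigma, where sigma is the
    standard deviation of all scores pooled together."""
    ps_treated = np.asarray(ps_treated, dtype=float)
    ps_control = np.asarray(ps_control, dtype=float)
    caliper = caliper_mult * np.concatenate([ps_treated, ps_control]).std(ddof=1)
    available = list(range(len(ps_control)))
    pairs = []  # (treated index, control index)
    for i in np.argsort(ps_treated):        # visit treated units in score order
        if not available:
            break
        j = min(available, key=lambda k: abs(ps_control[k] - ps_treated[i]))
        if abs(ps_control[j] - ps_treated[i]) <= caliper:
            pairs.append((int(i), j))       # accept only within-caliper matches
            available.remove(j)             # each control matched at most once
    return pairs, caliper

# Hypothetical logit propensity scores for the two arms (98 patients each)
rng = np.random.default_rng(42)
ps_miind = rng.normal(0.5, 1.0, 98)
ps_rio = rng.normal(-0.5, 1.0, 98)
pairs, caliper = caliper_match(ps_miind, ps_rio)
```

Because treated units outside the caliper are discarded, only a subset of patients is retained, which is how a matching percentage such as the 31% reported above arises.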
Patient data During the analysis period, 727 patients were operated on for painful HV using the two procedures. After applying inclusion and exclusion criteria, 217 patients were recruited, of which 21 were excluded. Hence, 196 patients were eligible and divided into two groups, according to the technique used: 98 patients by RIO and 98 by MIIND (Fig. ). Demographic characteristics of the entire cohort, divided according to the type of surgery, are reported in Table . The last follow-up was 60 months. Radiographic outcomes RIO group There were 41 (41.8%) patients with mild HV and 57 (58.2%) patients with moderate HV (Fig. ). The mean preoperative HVA was 23.36 ± 5.87° and decreased to 14.67 ± 6.43° at the last follow-up with a mean correction of 8.69° ( p < 0.0001). The mean IMA value decreased from 11.95 ± 2.60° preoperatively to 9.53 ± 2.83° at the last follow-up, with a mean correction of 2.42° ( p < 0.0001). The mean preoperative DMAA was 9.88 ± 5.68° and 9.79 ± 7.88° at the last follow-up, with a mean correction of 0.09°. The median dislocation of the medial sesamoid was 1, both preoperatively and at the last follow-up (Table ). MIIND group In this group, there were 58 (59.2%) patients with moderate HV and 40 (40.8%) with severe HV (Table ; Fig. ). The mean HVA was 35.42 ± 9.57° preoperatively and 10.50 ± 8.41° at the last follow-up, with a mean correction of 24.92° ( p < 0.0001). The mean IMA value decreased from 14.57 ± 3.12° preoperatively to 5.82 ± 3.24° at the last follow-up, with a mean correction of 8.75° ( p < 0.0001).
The mean preoperative DMAA was 14.05 ± 6.02°, while it was 7.77 ± 5.96° at the last follow-up, with a mean correction of 6.28° ( p < 0.0001). The median preoperative dislocation of the medial sesamoid was 3, while its value was 0 at the last follow-up. PSM model The variables that impacted the allocation to a specific surgical technique (RIO versus MIIND) were age (OR 1.06 [95%CI 1.02–1.09], p = 0.002), preoperative HVA values (OR 1.12 [95%CI 1.03–1.21], p = 0.005), and HV severity (OR 0.05 [95%CI 0.00–0.68] for grade 1 vs. grade 2, p = 0.024) (Table ). In other words, older patients with greater preoperative HVA values and more severe HV were more likely to be treated with MIIND (Fig. ). Propensity score values (as logit) for each observation are reported in Additional File 3, showing the matching between “treatment” and “control” observations along with their computed distances. The matching ensures comparability between the two groups for evaluating the effectiveness of RIO and MIIND surgical techniques. Comparison between the two surgical techniques (RIO vs. MIIND) After implementing the PSM model, the two groups differed regarding IMA correction and HVA decrease. The former was significantly different between the two groups at three months (mean difference of 4.43 ± 3.54°, p < 0.0001), one year (mean difference of 4.44 ± 3.58°, p < 0.0001), and sixty months (mean difference of 4.49 ± 3.92°, p < 0.0001). The mean difference in HVA between the two groups at three months, one year, and sixty months was 5.68 ± 8.67° ( p = 0.001), 4.36 ± 9.79° ( p = 0.017), and 4.63 ± 9.84° ( p = 0.015), respectively. In contrast, no differences could be found in DMAA correction. The mean difference in DMAA between the two groups at three months, one year, and sixty months was 2.81 ± 10.40° ( p = 0.129), 2.57 ± 9.53° ( p = 0.171), and 3.36 ± 10.17° ( p = 0.084), respectively.
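As a consistency check, the paired t statistics behind the reported p-values can be reconstructed from the summary figures, assuming the ± value is the standard deviation of the within-pair differences (an assumption, since the paper does not state this explicitly). A minimal sketch using the 60-month IMA difference between the matched groups (4.49 ± 3.92°, n = 30 matched pairs):

```python
import math

def paired_t_from_summary(mean_diff, sd_diff, n):
    """t statistic of a paired t-test, rebuilt from the mean and SD of
    the within-pair differences: t = mean_d / (sd_d / sqrt(n))."""
    return mean_diff / (sd_diff / math.sqrt(n))

# Reported 60-month IMA difference between the matched groups
t_ima = paired_t_from_summary(4.49, 3.92, 30)
# |t| of about 6.3 on 29 degrees of freedom is consistent with the
# reported p < 0.0001
```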
Finally, concerning the correction of sesamoids, there was a significant difference between the two groups only at three months ( p = 0.040) but not at 1 year ( p = 0.277) or sixty months ( p = 0.151) (Table ). Clinical functional outcomes In the initial population, the mean preoperative AOFAS score was 50.29 ± 7.47 and increased to 86.52 ± 11.51 at the last follow-up (Table ). The mean preoperative AOFAS score in the RIO group was 52.63 ± 6.53 and improved to 85.23 ± 13.02 at the last follow-up ( p < 0.0001). In the MIIND group, the mean preoperative AOFAS score was 47.95 ± 7.64, and it increased over time, reaching 69.95 ± 7.43 at three months, 81.57 ± 8.55 at 12 months, and 87.80 ± 9.68 at the last follow-up ( p < 0.0001). NRS-11 decreased from 6.41 ± 1.10 preoperatively to 1.23 ± 1.36 in the RIO group ( p < 0.0001), while it decreased from 7.46 ± 1.28 to 1.31 ± 1.38 at 60 months in the MIIND group ( p < 0.0001). At the last follow-up, the mean VAS score for patient satisfaction was 6.84 ± 2.19 in the RIO group and 7.46 ± 2.13 in the MIIND group. After the implementation of the PSM algorithm, no differences in AOFAS were detected between the two groups at three months (mean difference 1.63 ± 10.58, p = 0.428), one year (mean difference of 4.00 ± 17.18, p = 0.262) and sixty months (mean difference of 4.57 ± 19.33, p = 0.232). Similarly, no differences in NRS-11 (mean difference of 0.13 ± 2.05, p = 0.792) or in patient satisfaction at 60 months could be detected (with a mean difference of 1.07 ± 3.56, p = 0.091) (Table ). Complications Major complications (13 patients, 6.63%) included 8 cases of recurrence and one case of severe stiffness (ROM < 30°) in the RIO group; 5 cases of recurrence were observed at the last follow-up in the MIIND group. Minor complications (37 patients, 18.88%) included a slight loss of normal range of MTP joint motion (ROM 30°-74°) in 21 cases in the RIO group and 11 in the MIIND group.
During the analysis period, 727 patients were operated on for painful HV using the two procedures. After applying the inclusion and exclusion criteria, 217 patients were recruited, of whom 21 were excluded. Hence, 196 patients were eligible and were divided into two groups according to the technique used: 98 patients treated by RIO and 98 by MIIND (Fig. ). Demographic characteristics of the entire cohort, divided according to the type of surgery, are reported in Table . The last follow-up was 60 months. RIO group There were 41 (41.8%) patients with mild HV and 57 (58.2%) patients with moderate HV (Fig. ). The mean preoperative HVA was 23.36 ± 5.87° and decreased to 14.67 ± 6.43° at the last follow-up, with a mean correction of 8.69° ( p < 0.0001). The mean IMA value decreased from 11.95 ± 2.60° preoperatively to 9.53 ± 2.83° at the last follow-up, with a mean correction of 2.42° ( p < 0.0001). The mean DMAA was 9.88 ± 5.68° preoperatively and 9.79 ± 7.88° at the last follow-up, with a mean correction of 0.09°. The median dislocation of the medial sesamoid was 1, both preoperatively and at the last follow-up (Table ). MIIND group In this group, there were 58 (59.2%) patients with moderate HV and 40 (40.8%) with severe HV (Table ; Fig. ). The mean HVA was 35.42 ± 9.57° preoperatively and 10.50 ± 8.41° at the last follow-up, with a mean correction of 24.92° ( p < 0.0001). The mean IMA value decreased from 14.57 ± 3.12° preoperatively to 5.82 ± 3.24° at the last follow-up, with a mean correction of 8.75° ( p < 0.0001). The mean preoperative DMAA was 14.05 ± 6.02°, while it was 7.77 ± 5.96° at the last follow-up, with a mean correction of 6.28° ( p < 0.0001).
The median preoperative dislocation of the medial sesamoid was 3, while its value was 0 at the last follow-up. PSM model The variables that impacted the allocation to a specific surgical technique (RIO versus MIIND) were age (OR 1.06 [95% CI 1.02–1.09], p = 0.002), preoperative HVA (OR 1.12 [95% CI 1.03–1.21], p = 0.005), and HV severity (OR 0.05 [95% CI 0.00–0.68] for grade 1 vs. grade 2, p = 0.024) (Table ). In other words, older patients with greater preoperative HVA values and more severe HV were more likely to be treated with MIIND (Fig. ). Propensity score values (as logit) for each observation are reported in Additional File 3, showing the matching between “treatment” and “control” observations along with their computed distances. The matching ensures comparability between the two groups for evaluating the effectiveness of the RIO and MIIND surgical techniques.
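As a worked illustration of how the allocation model's per-unit odds ratios scale (an editorial addition, not an analysis from the paper, and assuming the fitted logistic model is log-linear over the range shown), per-unit odds ratios compound multiplicatively:

```python
# Illustrative only: compounding the per-unit odds ratios reported for
# the allocation model over larger, clinically meaningful differences.
# Assumes log-linearity of the fitted logistic model over this range.

def scale_odds_ratio(or_per_unit: float, units: float) -> float:
    """Odds ratio implied by a difference of `units` on the covariate."""
    return or_per_unit ** units

print(round(scale_odds_ratio(1.06, 10), 2))  # age: OR 1.06/year -> 1.79 per decade
print(round(scale_odds_ratio(1.12, 5), 2))   # HVA: OR 1.12/degree -> 1.76 per 5 degrees
```

So, under this reading, a ten-year age difference roughly 1.8-fold increases the odds of allocation to MIIND.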
Comparison between the two surgical techniques (RIO vs. MIIND) After implementing the PSM model, the two groups differed regarding IMA correction and HVA decrease. The former was significantly different between the two groups at three months (mean difference of 4.43 ± 3.54°, p < 0.0001), one year (mean difference of 4.44 ± 3.58°, p < 0.0001), and sixty months (mean difference of 4.49 ± 3.92°, p < 0.0001). The mean difference in HVA between the two groups at three months, one year, and sixty months was 5.68 ± 8.67° ( p = 0.001), 4.36 ± 9.79° ( p = 0.017), and 4.63 ± 9.84° ( p = 0.015), respectively. In contrast, no differences could be found in DMAA correction. The mean difference in DMAA between the two groups at three months, one year, and sixty months was 2.81 ± 10.40° ( p = 0.129), 2.57 ± 9.53° ( p = 0.171), and 3.36 ± 10.17° ( p = 0.084), respectively.
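The propensity-score matching step described above can be sketched as greedy 1:1 nearest-neighbour matching on the logit of the propensity score. This is an illustrative stdlib-only sketch under stated assumptions (greedy matching without replacement, no caliper, toy data), not the authors' actual implementation:

```python
import math

def logit(p: float) -> float:
    """Log-odds transform of a propensity score."""
    return math.log(p / (1.0 - p))

def greedy_match(treated: dict, control: dict) -> dict:
    """Greedy 1:1 nearest-neighbour matching on logit(propensity),
    without replacement. Returns {treated_id: control_id}."""
    available = dict(control)
    pairs = {}
    # Match treated units in descending order of propensity score,
    # a common heuristic that handles the hardest-to-match cases first.
    for tid, tp in sorted(treated.items(), key=lambda kv: -kv[1]):
        if not available:
            break
        cid = min(available, key=lambda c: abs(logit(tp) - logit(available[c])))
        pairs[tid] = cid
        del available[cid]
    return pairs

# Toy example: hypothetical propensities of receiving MIIND.
treated = {"t1": 0.80, "t2": 0.55, "t3": 0.30}
control = {"c1": 0.78, "c2": 0.52, "c3": 0.33, "c4": 0.10}
print(greedy_match(treated, control))
```

Matching on the logit rather than the raw probability is the usual choice because distances near 0 and 1 are otherwise compressed.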
Finally, concerning the correction of the sesamoids, there was a significant difference between the two groups only at three months ( p = 0.040) but not at one year ( p = 0.277) or sixty months ( p = 0.151) (Table ). Clinical functional outcomes In the initial population, the mean preoperative AOFAS score was 50.29 ± 7.47 and increased to 86.52 ± 11.51 at the last follow-up (Table ). The mean preoperative AOFAS score in the RIO group was 52.63 ± 6.53 and improved to 85.23 ± 13.02 at the last follow-up ( p < 0.0001). In the MIIND group, the mean preoperative AOFAS score was 47.95 ± 7.64, and it increased over time, reaching 69.95 ± 7.43 at three months, 81.57 ± 8.55 at 12 months, and 87.80 ± 9.68 at the last follow-up ( p < 0.0001). NRS-11 decreased from 6.41 ± 1.10 preoperatively to 1.23 ± 1.36 at the last follow-up in the RIO group ( p < 0.0001), while it decreased from 7.46 ± 1.28 to 1.31 ± 1.38 at 60 months in the MIIND group ( p < 0.0001). At the last follow-up, the mean VAS score for patient satisfaction was 6.84 ± 2.19 in the RIO group and 7.46 ± 2.13 in the MIIND group. After the implementation of the PSM algorithm, no differences in AOFAS could be computed between the two groups at three months (mean difference of 1.63 ± 10.58, p = 0.428), one year (mean difference of 4.00 ± 17.18, p = 0.262), or sixty months (mean difference of 4.57 ± 19.33, p = 0.232). Similarly, no differences in NRS-11 (mean difference of 0.13 ± 2.05, p = 0.792) or in patient satisfaction at 60 months could be detected (mean difference of 1.07 ± 3.56, p = 0.091) (Table ). Complications Major complications (13 patients, 6.63%) included 8 cases of recurrence and one case of severe stiffness (ROM < 30°) in the RIO group, and 5 cases of recurrence observed at the last follow-up in the MIIND group. Minor complications (37 patients, 18.88%) included a slight loss of the normal range of MTP joint motion (ROM 30°–74°) in 21 cases in the RIO group and 11 in the MIIND group.
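The headline complication proportions can be reproduced from the raw counts given in the text (98 patients per group, 196 overall); a quick editorial cross-check:

```python
# Cross-check (editorial): reproducing the complication percentages
# from the raw counts in the text (98 patients per group, 196 total).

def pct(n: int, total: int) -> float:
    """Percentage of n out of total, rounded to two decimals."""
    return round(100.0 * n / total, 2)

print(pct(13, 196))  # major complications, whole cohort -> 6.63
print(pct(37, 196))  # minor complications, whole cohort -> 18.88
```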
Furthermore, there were superficial wound infections in 4 patients in the MIIND group, which were treated successfully with antibiotic therapy, and one case of delayed wound healing caused by portal burns during the RIO procedure. Discussion Nowadays, several surgical techniques are used to treat HV, but the current literature provides no consensus on which technique leads to the best outcomes . While many studies have evaluated a specific percutaneous or MIS technique or have compared various MIS techniques with one another or with open techniques , to date, a comparison between the RIO and MIIND techniques has only been performed in a small cohort (40 patients) with moderate HV, a short follow-up, and basic analysis . In our series, patients treated by RIO showed a reduction in HVA and IMA between the preoperative values and those at the last follow-up, with mean corrections of 8.69° and 2.42°, respectively. A marginal reduction in the DMAA was also obtained in this group. These findings appear comparable to studies on the same surgical technique . Moreover, our results are in the range of correction found in the studies discussed in the review of Malagelada et al. . Isham himself stated that the average reduction of the HVA and IMA is especially noted when the RIO procedure is associated with Akin osteotomy and lateral release, which contributes to the lateral movement of the first metatarsal axis and decreases the varus deformity . For this reason, combining an Akin osteotomy with the more traditional procedures remains an attractive option in open surgery as well . The MIIND group also had reduced HVA and IMA values. Using the intramedullary nail device resulted in better HVA correction compared with other studies .
In the review performed by Jeyaseelan and Malagelada, the range of HVA correction comparing pre- and postoperative values was 13.9–16.8° using the Endolog device and 12.9–20.8° using the Minimally Invasive/Percutaneous Chevron Akin (MICA or PECA), conceptually the percutaneous counterpart of the MIIND technique but with a different fixation system (screws vs. nail) . In our study, the satisfactory IMA correction achieved by MIIND was similar to the values found in other studies, whose potential for improving IMA was previously highlighted in the systematic review of Malagelada et al. . The correction of the DMAA, negligible in the RIO group through the closed wedge medial osteotomy, became substantial in the MIIND group owing to the concomitant operative derotation and the sizeable lateral displacement of the first metatarsal head (MTH) allowed by the device. At the same time, these two surgical steps promoted the reduction of the tibial sesamoid position (TSP), while the internal fixation played a significant role in preventing recurrence of the valgus deformity over time. In both groups, the postoperative values of the analysed angles improved compared with the preoperative ones, but the original correction tended to decrease several months after surgery while remaining within normal values. The satisfactory results of our study were comparable to those of studies performed using both the RIO and MIIND techniques compared with other percutaneous internal fixation techniques . Di Giorgio et al., who compared the RIO and MIIND techniques in 20 patients each, obtained excellent results in both groups but did not detect significant differences in HVA and IMA . However, their number of patients was smaller, the follow-up shorter, and only patients with moderate HV were enrolled. In our study, the MIIND technique provided more satisfactory correction of HVA and IMA than the RIO technique.
This is because MIIND involves a complete translation of the MTH with internal fixation, providing a greater correction than the closed wedge medial osteotomy in severe HV . The potential for improving IMA with MIIND was also highlighted in the review of Malagelada et al. . Further, according to the authors, RIO appears to have the least potential for correcting both the IMA and HVA angles compared with the other percutaneous and MIS techniques. However, these findings are not supported by evidence. Similar results were found by Lewis et al., who employed the PECA . It has already been reported that the traditional chevron osteotomy, when compared with traditional open surgery such as the scarf procedure, yields significantly more favourable postoperative outcomes in terms of HVA correction but not in terms of IMA. Distal chevron osteotomy provides greater HVA correction than scarf osteotomy, while proximal chevron provides a larger IMA correction than distal chevron osteotomy . This study used a logistic regression model to identify the variables associated with the choice of surgery. Age, preoperative HVA values, and HV severity were the variables that affected the allocation of a patient to a specific group. Older patients, with greater preoperative HVA values and thus more severe HV, were more likely to be treated with MIIND. A PSM model was applied to compare the two groups, reducing bias and confounding factors and allowing a robust comparison between the two groups of 30 patients each. Significant differences were observed between the two groups regarding HVA and IMA (at three months, one year, and sixty months) and the sesamoids (at three months), while no differences were found regarding the DMAA. Therefore, the angle reduction not only occurs early but is also maintained over time, even at a long follow-up of 60 months. Regarding clinical outcomes, the AOFAS and NRS-11 scores improved after surgery in both groups. Similar results were observed by other authors .
In the MIIND group, improvement in the AOFAS score was found at the last follow-up, with lower improvement values than in other studies, which had shorter follow-ups . The greater AOFAS improvement reported in those studies may be explained by preoperative scores that were significantly lower than in ours. However, after implementing the PSM algorithm, no differences in AOFAS scores were computed between the two groups at three months, one year, or sixty months. Similarly, Di Giorgio et al. did not find significant differences in the clinical scores . In that study, likewise, no differences in VAS satisfaction at 60 months could be detected between the two groups, in agreement with our findings. A meta-analysis compared MIS vs. OPEN techniques, highlighting the superiority of MIS techniques in the early postoperative period (shorter surgery time, a more cosmetic scar, a higher satisfaction rate, and a faster recovery time) . Although MIS for HV is considered safe and effective, with radiological, functional, and clinical results comparable to open procedures, there is currently insufficient evidence in the literature to recommend MIS over open procedures or to favour one MIS technique over another. Our study aligns with the literature that recommends using percutaneous or MIS techniques without internal fixation only for mild HV and suggests techniques involving internal fixation for severe HV . Percutaneous techniques such as RIO applied to severe radiological deformities may not give the same results, mainly because they lack the internal fixation that maintains the correction . In our groups, minor complications (18.8%) were prevalent and resolved with medical therapy. These rates were, however, higher than those reported in the literature (21.42% vs. 1% in the RIO group; 11.22% vs. 1–2% in the MIIND group) . Major complications occurred in 8.16% of patients treated by RIO and in 5.10% of the MIIND group.
These rates were in line with those found by other authors (5–11% major complications in the RIO group and 2–3% in the MIIND group, respectively) , and the complications were resolved by revision surgery, yielding satisfactory results after accurate preoperative planning and identification of the causes of failure . The strengths of our study include the following: (1) it is the first study to analyse the long-term radiographic and clinical outcomes of a large patient cohort undergoing the RIO or MIIND technique in association with Akin osteotomy; (2) both techniques were used to correct a painful HV on one foot only, without additional procedures on the lateral rays; (3) the patient operations and postoperative protocol were standardised; (4) the clinical and radiographic outcomes were evaluated separately by blinded investigators; and (5) the PSM analysis model was applied, enabling a robust comparison of the two groups. The main limitations of our study include (1) its single-centre design, (2) its retrospective nature, (3) the relatively small sample size, and (4) the use of the AOFAS score, which is only partially validated , has a single question related to pain, and correlates poorly with the SF-36 in patients with foot complaints . However, the AOFAS score remains the most widespread health measurement in foot and ankle clinical practice, allowing the formulation of valid conclusions related to foot and ankle quality-of-life issues . Our study demonstrates the efficacy of the RIO and MIIND techniques assigned according to the severity of HV. We conclude that patients with higher HVA and moderate-to-severe HV should be treated with the MIIND technique, while subjects with mild-to-moderate deformity should undergo RIO. Overall, radiographic and clinical outcomes improved in patients treated by both methods.
However, patients treated with MIIND had better angular values at all follow-ups compared with RIO, while no differences were observed in DMAA correction, AOFAS scores, NRS-11, or VAS satisfaction. Further studies, such as randomised controlled trials with appropriate sample sizes, validated outcome measures, blinded assessors, and long-term follow-up, are needed to confirm our data and to determine the efficacy of MIS techniques. Below is the link to the electronic supplementary material. Supplementary Material 1 Supplementary Material 2 Supplementary Material 3
Impact of Immunosuppressive Drug Concentrations on Microvascular Inflammation, Negative Donor‐Specific Antibodies, and C4d‐Negative Status in Kidney Transplant Recipients
Introduction Kidney transplantation, the most established treatment for end‐stage renal disease, significantly improves the quality of life and survival rates of patients compared with dialysis . However, the success of transplantation largely depends on the effectiveness of immunosuppressive therapies that maintain the balance between graft rejection and long‐term adverse effects. Advances in immunosuppressive therapies have led to a reduction in the incidence of T cell‐mediated rejection, with an increasing focus now on the suppression of antibody‐mediated rejection (AMR) . Immunosuppressive therapy remains critical for this purpose, with reports indicating that blood concentrations of tacrolimus can influence the development of de novo donor‐specific antibodies (DSA) . Microvascular inflammation (MVI) refers to inflammation of the microvasculature within the kidney graft, particularly in the capillaries and small arteries. MVI, characterized by the combined assessment of the Banff g score and ptc score (g+ptc) , used to be considered one of the findings associated with AMR. However, in the Banff 2022 report, a new classification was proposed: MVI, DSA‐negative, and C4d‐negative . Possible causes for “MVI, DSA‐negative and C4d‐negative” include T cell‐mediated rejection, natural killer (NK) cell activation, infections, ischemia‐reperfusion injury, anti‐non‐human leukocyte antigen (HLA) antibodies, and thrombotic microangiopathy . However, the precise causes and mechanisms remain unclear. Furthermore, there are conflicting reports regarding the prognosis of cases histopathologically diagnosed as AMR but negative for DSA or C4d.
Some studies suggest a favorable prognosis , whereas others report that MVI itself, irrespective of antibody dependence, affects graft survival . This study assessed the prognosis and underlying risk factors associated with MVI in kidney transplant recipients who were negative for DSA and lacked C4d deposition in the peritubular capillaries. By focusing on cases in which conventional indicators of AMR were absent, we aimed to identify specific patient and clinical characteristics that may predispose patients to MVI and to elucidate their impact on long-term graft survival and function, thereby advancing the understanding of MVI pathogenesis in DSA‐ and C4d‐negative contexts and contributing to improved post-transplant management strategies. Methods 2.1 Ethics Statements The study protocol was approved by the Research Ethics Committee of our institution (approval number: 21374) and was conducted in accordance with the Declaration of Helsinki. All participants provided written informed consent. 2.2 Study Design and Population This prospective observational study included 268 DSA‐negative living kidney transplant recipients who underwent transplantation between 2013 and 2022 at the Department of Urology, Osaka Graduate School of Medicine, excluding cases with preformed DSA‐positive kidney transplants and those with de novo DSA or C4d deposition in the peritubular capillaries. We did not exclude cases of C4d deposition in the peritubular capillaries in ABO‐incompatible kidney transplantation. The recipients received induction therapy with basiliximab and immunosuppressive therapy comprising extended‐release tacrolimus, mycophenolate mofetil (MMF), and/or everolimus, with or without steroids. In cases of ABO‐incompatible kidney transplantation, plasma exchange and rituximab were also administered. The recipients were divided into two groups: MVI+DSA‐C4d‐ ( n = 31) and MVI‐DSA‐C4d‐ ( n = 237).
For ABO blood‐type compatible kidney transplantation, tacrolimus was started at a dose of 0.15 mg/kg/d 4 days before transplantation, and trough levels were adjusted to 5–8 ng/mL after transplantation. MMF was started at a dose of 1000 mg/d 4 days before transplantation and was adjusted to 2000 mg/d for 2 weeks post‐transplantation, 1500 mg/d during post‐transplantation week 2–4, and 1000 mg/d after 4 weeks post‐transplantation. Everolimus was initiated at a dose of 3 mg after transplantation and adjusted to achieve a trough level of 3–8 ng/mL. Steroids were discontinued 22 days after kidney transplantation. For ABO blood type‐incompatible kidney transplantation, MMF was started 14 days before transplantation, and tacrolimus was started 7 days before transplantation and adjusted after transplantation, as in blood type‐compatible cases. Patients who underwent ABO‐incompatible kidney transplantation received rituximab infusion and plasma exchange. No recipients had donor‐specific anti‐HLA antibodies. Graft biopsies were routinely performed 3, 12, 36, and 60 months after kidney transplantation. Additionally, biopsies were performed in patients with elevated serum creatinine levels. The pathological diagnosis was conducted according to the Banff 2022 guidelines, with MVI defined using a threshold of g+ptc≥2 . All biopsy specimens were re‐evaluated retrospectively in accordance with the Banff 2022 to ensure consistent pathological assessment. Two independent transplant pathologists, blinded to clinical outcomes, reviewed all available biopsy specimens. Discrepancies were resolved through consensus. In cases of ABO‐incompatible kidney transplantation, if C4d deposition was present but DSA was negative, it was considered C4d‐negative. 2.3 Definitions Graft failure was defined as the return to dialysis. Mortality was defined as death owing to any cause. The estimated glomerular filtration rate (eGFR) was calculated using a modified Japanese equation . 
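The "modified Japanese equation" cited for eGFR is presumably the three-variable Japanese GFR equation of Matsuo et al. (eGFR = 194 × Cr^−1.094 × age^−0.287, multiplied by 0.739 for women); a minimal sketch under that assumption:

```python
# Sketch of the three-variable Japanese eGFR equation (Matsuo et al.),
# assumed here to be the "modified Japanese equation" cited in the text.

def egfr_japanese(creatinine_mg_dl: float, age_years: float, female: bool) -> float:
    """Estimated GFR in mL/min/1.73 m^2."""
    egfr = 194.0 * creatinine_mg_dl ** -1.094 * age_years ** -0.287
    if female:
        egfr *= 0.739  # sex coefficient for women
    return egfr

# e.g. a hypothetical 50-year-old male recipient, serum creatinine 1.30 mg/dL
print(round(egfr_japanese(1.30, 50, female=False), 1))
```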
Recipients undergoing treatment for dyslipidemia were defined as having dyslipidemia. Clinical data, including laboratory data, were collected monthly after kidney transplantation. 2.4 Statistical Analysis Data are presented as means with standard deviation and frequency (percentage). A t ‐test was used to analyze continuous parameters with normal distributions. Non‐normally distributed variables were compared between groups using the Mann–Whitney U test; the findings are presented as medians with interquartile ranges (25%–75%). The χ²‐test or Fisher's exact test was used to compare differences in the proportions of nominal‐level variables. Multivariate logistic regression analysis was performed to identify independent risk factors for MVI+DSA‐C4d‐. All data were managed using the REDCap electronic registration software (Vanderbilt University, Nashville, TN, USA), and all statistical analyses were performed using R software (version 4.3.1; The R Project for Statistical Computing, Vienna, Austria). Statistical significance was set at a two‐tailed p value < 0.05.
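The Mann–Whitney U test named in the Statistical Analysis subsection can be sketched with its pairwise-counting formulation; an illustrative stdlib-only sketch (statistic only, ties counted as 1/2, no p-value — the authors' actual analyses were run in R):

```python
def mann_whitney_u(x, y):
    """U statistic for sample x relative to y: the number of (x, y)
    pairs with x > y, counting ties as 1/2. Equivalent to the
    rank-sum formulation of the Mann-Whitney test."""
    return sum(1.0 if a > b else 0.5 if a == b else 0.0
               for a in x for b in y)

u1 = mann_whitney_u([1.2, 3.4, 5.6], [2.1, 2.5, 7.0])
u2 = mann_whitney_u([2.1, 2.5, 7.0], [1.2, 3.4, 5.6])
print(u1, u2, u1 + u2)  # U1 + U2 always equals len(x) * len(y)
```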
Results 3.1 Patient Demographics Baseline characteristics exhibited no statistically significant differences between the MVI+DSA‐C4d‐ and MVI‐DSA‐C4d‐ cohorts (Table ). Variables such as recipient age (50.68 ± 13.70 vs. 50.18 ± 13.67 years, p = 0.848), body mass index (22.33 ± 3.45 vs. 22.12 ± 4.11 kg/m 2 , p = 0.791), and dialysis duration (median 2.73 vs. 4.55 years, p = 0.20) demonstrated equivalent distributions. End‐stage renal disease etiologies and HLA mismatch frequencies were similar across the groups.
Notably, steroid withdrawal was significantly more prevalent in the MVI+DSA‐C4d‐ group (51.6% vs. 32.9%, p = 0.047), indicating a potential correlation with the incidence of MVI+DSA‐C4d‐. 3.2 The Details of MVI Scores The g scores and ptc scores at the time of diagnosis and the final follow‐up biopsies (87.1% conducted at 60 months and the remaining at 36 months) are shown in Table . At the time of diagnosis, 22 cases (71.0%) were classified as g1+ptc1, five cases (16.1%) as g1+ptc2, two cases (6.5%) as g2+ptc2, and one case each (3.2%) as g1+ptc3 and g3+ptc3. In the follow‐up biopsies, 17 cases (54.8%) showed resolution of MVI, while three cases (9.7%) showed resolution of ptc but persistence of g, and another three cases (9.7%) showed persistence of ptc only. MVI persisted in eight cases (25.8%). In all cases where MVI persisted, the MVI scores remained stable or showed a trend toward improvement. Transplant glomerulopathy was observed in only one case. The proportion of g1+ptc1 was 69.6% in ABO‐compatible kidney transplantation and 75.0% in ABO‐incompatible cases, with no indication that ABO‐incompatible cases had higher MVI scores. No cases showed DSA or C4d deposition during the observation period. 3.3 Graft and Patient Survival Graft survival rates were not statistically different between groups, with a 5‐year survival rate of 95.5% in the MVI+DSA‐C4d‐ group versus 96.6% in the MVI‐DSA‐C4d‐ group ( p = 0.772, Figure ). Additionally, patient survival at 5 years showed no significant differences between the groups (95.7% vs. 95.9%, p = 0.735, Figure ). Furthermore, we compared the groups with positive and negative g status among those negative for DSA and C4d; however, no significant differences were observed in graft or patient survival rates. Similar results were observed for the ptc status. 
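As an editorial cross-check of the steroid-withdrawal comparison in 3.1, the reported percentages (51.6% of 31 vs. 32.9% of 237) back-calculate to roughly 16/31 vs. 78/237 — an assumption, since raw counts are not given — and the χ²-test named in the Methods can then be sketched with the stdlib (the χ²(1) survival function is erfc(√(x/2))):

```python
import math

def chi2_2x2(a: int, b: int, c: int, d: int):
    """Pearson chi-square (no continuity correction) for the 2x2 table
    [[a, b], [c, d]]; returns (statistic, two-sided p) with df = 1."""
    n = a + b + c + d
    expected = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
                (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    stat = sum((o - e) ** 2 / e for o, e in zip([a, b, c, d], expected))
    p = math.erfc(math.sqrt(stat / 2.0))  # survival function of chi2(df=1)
    return stat, p

# Steroid withdrawal: ~16/31 (MVI+DSA-C4d-) vs. ~78/237 (MVI-DSA-C4d-)
stat, p = chi2_2x2(16, 15, 78, 159)
print(f"chi2 = {stat:.2f}, p = {p:.3f}")
```

This yields p ≈ 0.04, close to the reported p = 0.047; the small difference may reflect a continuity correction, Fisher's exact test, or rounding of the back-calculated counts.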
3.4 Immunosuppressive Drug Concentrations Low trough concentrations of tacrolimus and everolimus were significantly associated with a higher rate of antibody‐independent MVI. Among those negative for DSA and C4d, comparisons between groups with positive and negative g status, as well as ptc status, revealed no significant differences in graft or patient survival rates. Tacrolimus levels in the MVI+DSA‐C4d‐ group averaged 3.10 ± 0.94 ng/mL, which was significantly lower than the average of 5.25 ± 1.54 ng/mL in the MVI‐DSA‐C4d‐ group. Everolimus concentrations were also lower in the MVI+DSA‐C4d‐ group (2.96 ± 1.16 ng/mL vs. 4.37 ± 1.38 ng/mL). The odds ratios (ORs) were 0.169 (95% confidence interval [CI], 0.055–0.515; p = 0.002) for tacrolimus and 0.386 (95% CI, 0.171–0.874; p = 0.022) for everolimus. Conversely, mycophenolic acid (MPA) levels (OR, 0.994; 95% CI, 0.554–1.780; p = 0.984) and steroid withdrawal (OR, 1.980; 95% CI, 0.318–12.000; p = 0.470) exhibited no significant associations with MVI+DSA‐C4d‐ risk. Both the g+DSA‐C4d‐ and ptc+DSA‐C4d‐ groups had lower trough concentrations of these drugs than their respective negative counterparts. Multivariate models further substantiated that lower trough concentrations of tacrolimus (OR, 0.169; 95% CI, 0.056–0.515; p = 0.0018) and everolimus (OR, 0.386; 95% CI, 0.171–0.874; p = 0.0224) independently correlated with MVI+DSA‐C4d‐ onset (Table ). In contrast, MPA levels and steroid discontinuation were not statistically significant predictors of MVI+DSA‐C4d‐ in either univariate or multivariate analyses. 3.5 Graft Function Graft function, indicated by creatinine levels, eGFR, and urinary protein levels, did not show statistically significant differences between the MVI+DSA‐C4d‐ and MVI‐DSA‐C4d‐ groups. The mean creatinine level was 1.30 ± 0.43 mg/dL in the MVI+DSA‐C4d‐ group and 1.38 ± 0.46 mg/dL in the MVI‐DSA‐C4d‐ group ( p = 0.370).
Similarly, the eGFR was comparable between the groups (41.54 ± 19.04 mL/min/1.73 m² in the MVI+DSA‐C4d‐ group and 43.66 ± 15.05 mL/min/1.73 m² in the MVI‐DSA‐C4d‐ group; p = 0.566). Urinary protein levels, measured as both mg/dL and g/Cr, did not differ significantly between the groups, with p values of 0.117 and 0.427, respectively. These results suggest that the preservation of graft function was similar in both groups, regardless of the MVI+DSA‐C4d‐ status. 3.6 Clinical Complications and Additional Variables The incidences of viral infections, cardiovascular disease, hypertension, and dyslipidemia were statistically comparable between the groups. Similarly, the occurrence rates of new‐onset diabetes mellitus post‐transplantation and de novo DSA did not differ significantly, suggesting that MVI+DSA‐C4d‐ status did not exert a notable influence on these post‐transplantation complications (Table ).
Discussion This study provides new insights into the effects of immunosuppressive drug concentrations on antibody‐independent MVI in kidney transplant patients and highlights the potential of maintaining optimal tacrolimus and everolimus levels to reduce the incidence of MVI without the need for steroids. MVI in kidney transplant recipients is characterized by inflammation of small blood vessels in the transplanted kidney . The Banff 2022 meeting introduced significant updates to the classification of AMR and MVI in kidney transplant pathology . One of the key additions was the recognition of a specific phenotype of MVI that occurs in the absence of DSA and without C4d deposition in peritubular capillaries. Such cases are often grouped into broader categories that do not specifically account for their unique pathological features. This sometimes leads to confusion during diagnosis and treatment because the mechanisms underlying these conditions are not well understood. Inflammation without the presence of DSA and C4d deposition in the peritubular capillaries suggests a different mechanism compared with typical antibody‐mediated rejection . The absence of DSA indicates that the inflammation is not caused by antibodies targeting donor cells, and the lack of C4d deposition further supports this as it indicates no activation of the complement pathway. The Banff 2022 classification will facilitate more precise diagnostic and therapeutic approaches, encouraging further research into the underlying causes and optimal management of MVI in the absence of DSA and C4d. Recent insights into MVI have reshaped the conventional view that antibodies and complement are the sole contributors to MVI development.
These findings reveal that MVI can arise independent of antibody involvement. Antibody‐independent MVI appears to be driven by the activation of NK cells via a “missing self” mechanism, a mismatch between donor HLA and recipient inhibitory killer‐cell immunoglobulin‐like receptors that ultimately disrupts the immune equilibrium. The resulting NK cell activation inflicts damage on graft endothelial cells, affecting graft survival. Survival analyses show that patients with MVI (both MVI+DSA+ and MVI+DSA‐) experience significantly poorer outcomes compared with those without MVI (MVI‐DSA‐) . However, in our study, the presence or absence of MVI, DSA‐negative status, and C4d‐negative status did not significantly affect patient prognosis. Several factors may be responsible for this discrepancy. First, cases classified as MVI, DSA‐negative, and C4d‐negative may include instances where mechanisms unrelated to the “missing self”, such as ischemia‐reperfusion injury, non‐HLA antibody involvement, or other immune interactions, are at play. This reflects the complexity and diversity of post‐transplantation immune interactions. Second, once MVI is identified in kidney transplant biopsies in clinical practice, timely modifications are made to the immunosuppressive therapy, including dose increases or delays in reduction. These responsive adjustments may contribute to mitigating the adverse effects typically associated with MVI and could account for the improved outcomes observed in our analysis. Furthermore, all cases of MVI in this study were identified through protocol biopsies, which may explain the lack of adverse outcomes associated with MVI+DSA‐C4d‐. Protocol biopsies enable the early detection of subclinical inflammation and allow timely adjustments to immunosuppressive regimens, potentially mitigating the progression of MVI and preserving graft function. 
In contrast, MVI identified in for‐cause biopsies, often performed in response to graft dysfunction, may reflect more severe immune injury and could be associated with worse outcomes. This distinction underscores the importance of routine protocol biopsies in identifying and managing MVI before it manifests as clinically significant graft dysfunction. These findings highlight the need for ongoing exploration of MVI pathogenesis and its influence on graft survival and transplant prognosis, particularly in DSA‐negative and C4d‐negative cases, the role of protocol biopsies in mitigating these effects, and the impact of emerging therapeutic strategies. In particular, the discovery that NK cell activation by the “missing self” mechanism is mediated via the mTORC1 pathway highlights a promising therapeutic target . In preclinical studies, the mTOR inhibitor rapamycin has shown efficacy in curbing the progression of chronic vascular rejection associated with the “missing self” mechanism. In our study, we observed a correlation between blood everolimus levels and MVI, DSA‐negative status, and C4d‐negative status, suggesting a potential protective effect. Everolimus has been reported to inhibit both mTORC1 and mTORC2 more effectively than sirolimus, with the inhibition of mTORC2 specifically suppressing endothelial cell functional changes following HLA class I cross‐linking, which are implicated in chronic rejection . However, its effects on NK cells remain unclear, warranting further investigation. Multiple studies have established a relationship between tacrolimus blood concentration and the emergence of de novo DSA and AMR . Although our findings indicate that tacrolimus may be correlated with MVI, DSA‐negative status, and C4d‐negative status, this is likely due to its effect on weak antigen‐antibody responses or T cell‐mediated reactions.
These findings imply that although both everolimus and tacrolimus appear to reduce the incidence of MVI and DSA‐negative‐ and C4d‐negative cases, their mechanisms of action may differ fundamentally. Although the utility of regimens combining tacrolimus and everolimus has been previously reported , our findings provide valuable insights for further investigations regarding optimal immunosuppressive strategies to address antibody‐independent MVI and improve overall transplant outcomes. The univariate analysis showed a statistically significant association between steroid withdrawal and MVI+DSA‐C4d‐ incidence, but this association was not observed in the multivariate analysis. This discrepancy suggests that steroid withdrawal may not independently predict MVI risk and that other factors, such as tacrolimus and everolimus levels, play a more significant role. Although our findings emphasize the importance of maintaining optimal tacrolimus and everolimus levels, the role of steroids in mitigating MVI cannot be entirely excluded and warrants further investigation in larger cohorts. This study has several limitations, including its small sample size, single‐center design, observational nature restricting causality assessment, short observation period, clinical heterogeneity, and the potential influence of confounding factors. Additionally, this study exclusively focused on living donor kidney transplant recipients, which limits the generalizability of the findings to deceased donor transplant. Differences in immunological risk profiles, ischemia‐reperfusion injury, and graft survival outcomes between living and deceased donor transplants may influence the incidence and clinical relevance of MVI. Furthermore, this study relied on trough concentrations of immunosuppressive agents as a surrogate for total exposure, which is a recognized limitation. 
Trough levels, although practical for routine clinical use, may not fully capture total drug exposure, particularly for MMF, whose area under the curve (AUC) may better correlate with its pharmacodynamic effects. The lack of association between MMF levels and MVI+DSA‐C4d‐ in our study may partly be attributed to this limitation. Further studies employing pharmacokinetic modeling or AUC measurements could provide more robust insights into the relationship between drug exposure and MVI risk. Nevertheless, this study provides valuable new insights into DSA‐negative and C4d‐negative MVI and highlights the impact of immunosuppressive drug concentrations on the incidence of MVI. These findings provide an important foundation for future multicenter studies and long‐term prospective trials aimed at obtaining comprehensive and reliable data to refine immunosuppressive management strategies. It is important to note that this study excluded cases with preformed DSA. Early postoperative DSA‐negative and C4d‐negative MVI in recipients with preformed DSA may require careful attention, as it could pose a risk of progressing to AMR. In conclusion, the current study underscores the significance of maintaining appropriate tacrolimus and everolimus trough levels to manage DSA‐negative and C4d‐negative MVI, thereby providing insights into the nuanced effects of immunosuppression beyond conventional antibody‐mediated pathways. These findings suggest that maintaining optimal levels of these drugs may help prevent adverse graft outcomes associated with antibody‐independent MVI. Future studies should investigate the molecular mechanisms by which tacrolimus and everolimus modulate immune responses in kidney transplantation. This may involve exploring the roles of individual drug metabolism, patient genetic profiles, and interactions with NK cell pathways, potentially paving the way for a more personalized and effective approach to immunosuppressive therapy in kidney transplantation.
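The trough-versus-AUC limitation raised above can be illustrated numerically: two concentration-time profiles can share the same trough yet differ substantially in total exposure. The sketch below applies the linear trapezoidal rule to invented example values, not study measurements:

```python
# Trapezoidal-rule AUC over a dosing interval, with two hypothetical
# concentration-time profiles that have identical troughs (value at 12 h)
# but different total exposure. All numbers are invented examples.

def auc_trapezoid(times_h, conc):
    """AUC (concentration x hours) by the linear trapezoidal rule."""
    return sum(
        (t2 - t1) * (c1 + c2) / 2
        for (t1, c1), (t2, c2) in zip(zip(times_h, conc), zip(times_h[1:], conc[1:]))
    )

times = [0, 1, 2, 4, 8, 12]
profile_a = [1.0, 8.0, 6.0, 4.0, 2.0, 1.0]  # high peak
profile_b = [1.0, 3.0, 2.5, 2.0, 1.5, 1.0]  # flat profile, same trough

print(auc_trapezoid(times, profile_a))  # 39.5
print(auc_trapezoid(times, profile_b))  # 21.25
```

Both profiles end at the same trough concentration, so trough monitoring alone would not distinguish their roughly two-fold difference in exposure, which is the point made about MPA above.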
Yoichi Kakuta contributed to the research design, writing of the manuscript, research performance, and data analysis. Yoko Maegawa‐Higa contributed to the research design. Soichi Matsumura contributed to the research design and data analysis. Shota Fukae contributed to the research design and data analysis. Ryo Tanaka contributed to the research design and data analysis. Hiroaki Yonishi contributed to the research design, investigation, and data analysis. Shigeaki Nakazawa contributed to the research design, investigation, and data analysis. K.Y. contributed to the research design, investigation, and data analysis. Tomoko Namba‐Hamano contributed to the research design and pathological diagnosis. Yoshitaka Isaka contributed to the research design and investigation. Norio Nonomura contributed to the research design and investigation. The authors declare no conflicts of interest.
Identification of HPV16 Lineages in South African and Mozambican Women with Normal and Abnormal Cervical Cytology | 56917471-c0f4-4ad4-937f-d6221400e34f | 11360388 | Pathology[mh] | It is well-established that the Human Papillomavirus (HPV) is the primary causative agent of cervical cancer . HPV infection is one of the most common sexually transmitted infections and can naturally be cleared by the host immune system within years after acquisition . The International Agency for Research on Cancer has identified more than 200 HPV genotypes and categorised them into high-risk (HrHPV) and low-risk (LrHPV) types according to their potential to induce malignancy . Eighteen HrHPV types (HPV16, 18, 26, 31, 33, 35, 39, 45, 51, 52, 53, 56, 58,59, 66, 68, 73, and 83) have been identified as oncogenic and are extensively studied for their role in cervical cancer . Persistent infections with these HPV types can cause a cellular change that leads to the development of cervical intraepithelial neoplasia and, eventually, invasive cervical cancer . Although the importance of each genotype may differ by region, Human papillomavirus 16 and 18 (HPV16 and HPV18) are involved in more than 70% of cervical cancers worldwide, and the HPV16 genotype is one of the most prevalent in the sub-Saharan Africa region . HPV16’s genome size is approximately 8000 bases in length and comprises three functional regions: a non-coding upstream regulatory region (URR), also known as the long control region (LCR) that contains regulatory elements for viral replication and transcription; an early region formed by E1, E2, and E4–E7 genes that encode core viral proteins from which E6 and E7 proteins are responsible for transformations of the host cell; a late region encoding L1 and L2 capsid proteins related to the viral DNA packaging and assembling in an icosahedral structure . 
Based on sequence diversity, HPV16 has been grouped into four phylogenetic lineages, A, B, C, and D, and all of them have been implicated in cervical carcinogenesis . These lineages differ from one another by 1.0–10% of nucleotides at the whole-genome level . Two variants of the same lineage that differ by 0.5–1.0% of nucleotides are further divided into sublineages . The four identified lineages of HPV16 are divided into sublineages A1–3 (formerly termed European), A4 (Asian); B (African-1), C (African-2), and D1–3 (North American, Asian-American) and have been associated with different cervical precancer and cancer risks . Even within HPV16 variants, genetic polymorphisms may play a key role in infection persistence and oncogenic potential . Geographically and ethnically, HPV16 variant distribution and associated carcinogenicity vary worldwide . The European HPV16 A1, A2, and A3 sublineages account for the majority of HPV16 infections, while the non-European B, C, and D variants are associated with an increased risk of cancer progression and greater severity of cervical lesions compared with the European variants . Different nucleotide mutations in the HPV16 LCR, related to altered pathways involved in viral persistence and cancer development, have been reported. For example, a Chinese study described two LCR nucleotide mutations (G7193T and G7518A) located at potential binding sites of the FOXA1 (forkhead box protein A1) and SOX9 (sex-determining region Y-box 9) transcription factors, respectively . Furthermore, single nucleotide polymorphisms (SNPs) and nucleotide duplications in the LCR and E6 sequence regions have been directly related to cervical cancer severity . HPV16 is one of the most prevalent genotypes in South Africa and Mozambique, but there is a lack of information regarding the distribution and circulation of the HPV16 variants .
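The lineage and sublineage cut-offs described above can be expressed as a simple classifier over aligned sequences. The sketch below uses short invented toy strings (not real HPV16 genomes) purely to illustrate the percentage thresholds:

```python
# Toy sketch of the thresholds separating HPV16 lineages (>1% whole-genome
# nucleotide difference) from sublineages of one lineage (0.5-1.0%).
# All sequences below are invented stand-ins, not real viral genomes.

def percent_difference(seq_a, seq_b):
    """Pairwise nucleotide difference (%) over aligned, equal-length sequences."""
    assert len(seq_a) == len(seq_b)
    mismatches = sum(1 for a, b in zip(seq_a, seq_b) if a != b)
    return 100.0 * mismatches / len(seq_a)

def classify_pair(seq_a, seq_b):
    """Classify two aligned variants using the lineage/sublineage cut-offs."""
    d = percent_difference(seq_a, seq_b)
    if d > 1.0:
        return "different lineages"
    if d >= 0.5:
        return "same lineage, different sublineages"
    return "same sublineage"

ref = "ACGT" * 250                    # 1000 nt stand-in segment
var1 = "TCGT" + "ACGT" * 249          # 1 change in 1000 nt = 0.1%
var2 = ("TCGT" * 7) + "ACGT" * 243    # 7 changes = 0.7%
var3 = ("TCGT" * 20) + "ACGT" * 230   # 20 changes = 2.0%

print(classify_pair(ref, var1))  # same sublineage
print(classify_pair(ref, var2))  # same lineage, different sublineages
print(classify_pair(ref, var3))  # different lineages
```

In practice these comparisons are made over whole aligned genomes rather than toy strings, but the threshold logic is the same.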
Knowledge of HPV16 variants circulating in a specific geographical region is one of the most important tools for cervical cancer prevention goals. This study aimed to investigate the HPV16 variant distribution in cervical samples collected from women with normal and abnormal cervical cytology in South Africa and Mozambique and to assess the phylogenetic relationships among variants. 2.1. Population Study and Samples Samples were collected from women aged 30–98 years who were attending the community health clinic and the referral clinic within the OR Tambo district municipality, Mthatha, for cervical cancer screening or any other reason between September 2017 and August 2018. Samples were collected using cervical brushes for HPV testing and were stored in a Digene transport medium (Qiagen Inc., Gaithersburg, MD, USA). Additionally, non-pregnant women seeking care regarding gynaecological symptoms such as venereal pain, genital ulcers, and vaginal discharge or seeking family planning services were enrolled in health facilities in the Mavalane Health area in Maputo between February 2018 and July 2019. Samples were collected using cervical brushes that were stored in BD SurePath Collection Vials (Becton, Dickinson and Company, Franklin Lakes, NJ, USA). All the study samples were stored at −80 °C until HPV genotyping. Pap smear and colposcopy exams were performed to assess the cervical cytological alterations that were classified according to the 2001 Bethesda System. 2.2. DNA Extraction and HPV Genotyping DNA extraction was performed using a MagNA Pure Compact (Roche Diagnostics, Indianapolis, IN, USA) and the MagNA Pure Compact Nucleic Acid Isolation Kit (Roche Diagnostics, IN, USA) following the manufacturer's instructions.
HPV genotyping was performed using a multiplex HPV Direct Flow CHIP Kit (Vitro Master Diagnóstica, Sevilla, Spain) through the amplification of a fragment of the viral L1 region of HPV using polymerase chain reaction (PCR) according to the manufacturer's instructions. Then, hybridisation onto a membrane with DNA-specific probes was performed using the DNA-Flow technology for manual HybriSpot platforms according to the manufacturer's instructions. 2.3. DNA Amplification and Sequencing HPV16 positive DNA samples were subjected to amplification in a 50 μL reaction mix containing Prime Star GXL DNA Polymerase (Takara Bio, Tokyo, Japan), 0.75 μM of each primer (16-F101 and 16-R20) , and 5 μL of the template sample. Thermocycling conditions used were: 95 °C for 5 min followed by 40 cycles of 98 °C for 30 s, 63 °C for 30 s, 72 °C for 2 min, and final elongation at 72 °C for 5 min. Two percent agarose gel stained with ethidium bromide solution (Sigma Aldrich, Milwaukee, WI, USA) was used to visualise the amplicons. Nucleotide sequences were obtained through the Sanger sequencing method (BigDye Terminator Cycle Sequencing kit v1.1), and the quality of the resulting sequence fragment corresponding to an 1160 bp stretch covering the entire LCR and the first 300 nt of the E6 ORF was analysed using FastQC software (Babraham Bioinformatics, V 0.12.0). Sequences with insufficient quality scores were discarded after repeated analysis. 2.4. Phylogenetic and Statistical Analyses The study sequences were aligned with 16 reference sequences belonging to HPV16 sublineages: KU053910, KU053914, HQ644298, and AF536180 from the B lineage (African 1); AF472509, KU053922, HQ644244, and KU053921 from the C lineage (African 2); AF536179, NC001526, and HQ644236 from the A1–3 sublineages (European); AF534061 from the A4 sublineage (Asian); and KU053933, AY686579, AF402678, and HQ644257 from the D lineage (North American, Asian-American).
All the reference sequences were obtained from Papilloma Episteme (PaVE, https://pave.niaid.nih.gov , accessed on 21 June 2023). Nucleotide alignment was performed using Geneious prime software (Dotmatics, V.2022.2). A Maximum Composite Likelihood phylogenetic tree was generated in MEGA 11 using the UPGMA method . The reliability of the observed clades was shown with internal node bootstrap values of ≥70% (after 1000 replicates). Graphics were generated using GraphPad Prism version 7.2 (GraphPad Software, San Diego, CA, USA). The prevalence of HPV16 and its lineages was calculated. Categorical variables were summarised using percentages as appropriate. When the data were presented as proportions of the total sample, the missing data were excluded from the denominator. All sequences were submitted to the NCBI database, and their accession numbers in GenBank range from PQ178098 to PQ178155.
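As a rough illustration of the UPGMA (average-linkage) clustering that underlies the tree building named above, the sketch below merges the closest clusters of an invented four-taxon distance matrix; the labels and distances are hypothetical, not values from this analysis:

```python
# Minimal UPGMA agglomerative clustering on a toy distance matrix.
# Distances between the four stand-in taxa are invented for illustration.

def avg_dist(c1, c2, d):
    """Mean pairwise distance between taxa of two clusters (UPGMA criterion)."""
    return sum(d[frozenset([a, b])] for a in c1 for b in c2) / (len(c1) * len(c2))

def upgma(labels, d):
    """Return the merge history as [(cluster_a, cluster_b, branch_height), ...]."""
    clusters = [frozenset([l]) for l in labels]
    history = []
    while len(clusters) > 1:
        # Pick the two clusters with the smallest average distance.
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: avg_dist(clusters[ij[0]], clusters[ij[1]], d),
        )
        h = avg_dist(clusters[i], clusters[j], d) / 2  # ultrametric node height
        history.append((clusters[i], clusters[j], h))
        merged = clusters[i] | clusters[j]
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    return history

# Hypothetical pairwise distances between four variant stand-ins.
d = {
    frozenset(["A1", "A2"]): 0.2,
    frozenset(["A1", "B1"]): 0.9,
    frozenset(["A1", "C1"]): 1.0,
    frozenset(["A2", "B1"]): 0.9,
    frozenset(["A2", "C1"]): 1.0,
    frozenset(["B1", "C1"]): 0.6,
}
history = upgma(["A1", "A2", "B1", "C1"], d)
print(sorted(history[0][0] | history[0][1]))  # ['A1', 'A2'] merge first (closest pair)
```

Tools such as MEGA add bootstrap resampling on top of this basic clustering step to assess clade reliability.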
3.1. Study Population The demographical data of the South African participants can be accessed in Taku et al. 2020 , and for Mozambican participants in Maueia et al., 2021 . Briefly, the median (interquartile range [IQR]) age was 46 (38–55) years for the South African and 38 (14–62) years for the Mozambican study population. Of the 104 HPV16 positive samples, 58 showed acceptable DNA quality scores for inclusion (52 from South Africa and 6 from Mozambique). Of the 52 South African participants, 96% (50/52) had an abnormal cervical cytology result, with a high-grade squamous intraepithelial lesion (HSIL) (31/52, 60%) being the most common . 3.2.
Phylogenetic Analyses A phylogenetic analysis of the 52 variants isolated in South Africa and 6 isolated in Mozambique aligned to reference sequences showed that 50% of all the isolates (29/58) belonged to the B and C lineages (African variants). Specifically, 34% of the isolates (20/58) were clustered in the C1 sublineage (AF472509) (African 2 lineage). Additionally, from the B lineage, 14% of the isolates (8/58) were clustered in the B1 sublineage (AF536180), and 2% of the isolates (1/58) were clustered in the B2 sublineage (HQ644298). Notably, 45% of all the isolates (26/58) belonged to the A lineage (European variants) and were clustered in the A1 sublineage (NC-001526). Five per cent of the isolates (3/58) belonged to the D lineage (Asian-American variants), with 3% (2/58) in the D3 sublineage (AF402678) and 2% (1/58) in the D1 sublineage (HQ644257). This study found no isolates that matched the remaining sublineages under investigation. 3.3. Relationship between Cytology and HPV16 Lineage Distribution Most of the study participants (55%) had high-grade squamous intraepithelial lesions (HSILs) as a cervical abnormality, and 45% had non-HSILs . Of the 45% (26/58) of sequences that clustered to the A lineage (European variants), 24% (14/58) were collected from patients with HSILs and 21% (12/58) from non-HSIL patients. Of the 50% (29/58) of sequences that clustered to the B and C lineages (African variants), 29% (17/58) were collected from patients with HSILs and 21% (12/58) from non-HSIL patients. 3.4. LCR and E6 Regions Nucleotide Sequences All studied sequences were compared to the 16 reference sequences from the HPV16 sublineages to analyse the presence of single nucleotide polymorphisms (SNPs) in the LCR and E6 regions. In the LCR, a total of 49 SNPs were detected , and all sequences showed the presence of more than one SNP. The 15 most frequently detected SNP percentages and the specific sublineages are in bold in .
In the E6 region, a total of 12 SNPs were detected , and all sequences showed the presence of more than one SNP. Eight of them were the most frequently detected. The percentages and the specific sublineages are in bold in . 3.5. SNPs Distribution According to the Sequence Lineages Of the 15 most prevalent LCR SNPs, seven (G74054, G7437A, C7605, C7702T, G7750T, and C7853T) were found mostly in sequences that clustered to the A1–3 sublineages (European variants), while 8 SNPs (C7303C, A7351G, C7401A, T7585C, A7742G, C7753A, G7755A, and A7792C) were mostly found in the B and C lineages (African variants). The SNP A7792C was found strictly in the C1–4 sublineages (African 2 variants) . In the E6 region, four (G146T, T287A, A290G, and C336T) of the eight most prevalent SNPs were found in the A1–3 sublineages (European variants), and the other four (C110T, T133G, G144C, and G404A) were found in the B and C lineages (African variants). The G144C and G146T SNPs were found in the E6 PDZ domain responsible for the P53 binding.
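SNP labels such as A7351G encode the reference base, the genome coordinate, and the observed base. A minimal sketch of how such calls are derived from a sample aligned against a reference follows; the sequences and the start coordinate are invented for illustration, not the study's data:

```python
# Sketch of SNP naming (reference base + genome position + observed base)
# from an aligned sample/reference pair. Inputs are hypothetical.

def call_snps(reference, sample, start_pos):
    """Compare an aligned sample to the reference; return ref+pos+alt labels."""
    assert len(reference) == len(sample)
    return [
        f"{r}{start_pos + i}{s}"
        for i, (r, s) in enumerate(zip(reference, sample))
        if r != s and s != "N"  # skip ambiguous base calls
    ]

ref    = "GATCACGT"
sample = "GATAACGC"
print(call_snps(ref, sample, 7300))  # ['C7303A', 'T7307C']
```

Tabulating such calls across all study sequences, and cross-referencing them against each sequence's assigned lineage, yields SNP frequency tables of the kind summarised above.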
Globally, HPV16 is one of the most common HrHPV genotypes implicated in cervical cancer . In the sub-Saharan African region, HPV16 has been the most implicated in all cervical abnormalities . The HPV16 genotype can be divided into four main variant lineages and sixteen sublineages, differing in the whole-genome sequence by less than 10% for main variants and as little as 0.5% for sublineages . The distribution of major variants around the globe is known to follow specific geographic and ethnic patterns . The present study reports the HPV16 lineages according to cervical cytology in a group of women from South Africa and Mozambique. Our findings showed that 79% (46/58) of the participants had a cervical abnormality, and 55% (31/58) had HSILs. This high prevalence is corroborated by evidence from several HPV16 identification studies conducted in the region . The European variant is known to be spread worldwide, except in sub-Saharan Africa, where the African variants are more prevalent, accounting for a median of 57% of cases, a figure close to our study's finding of 50% .
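The divergence thresholds quoted above (less than 10% between main lineages, as little as 0.5% within sublineages) can be turned into a toy nearest-reference classifier. This is an illustrative sketch only; reference names and the 1000-bp toy "genomes" are invented, and real sublineage assignment is done phylogenetically against the 16 reference sequences.

```python
def percent_difference(seq_a, seq_b):
    """Pairwise nucleotide difference (%) between two aligned sequences,
    ignoring gap positions."""
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if "-" not in (a, b)]
    return 100.0 * sum(a != b for a, b in pairs) / len(pairs)

def closest_reference(query, references, lineage_cut=10.0, sublineage_cut=0.5):
    """Return the nearest reference and whether the distance falls within
    the lineage (<10%) or sublineage (<0.5%) divergence bands."""
    name, dist = min(
        ((n, percent_difference(query, s)) for n, s in references.items()),
        key=lambda pair: pair[1],
    )
    if dist < sublineage_cut:
        level = "same sublineage"
    elif dist < lineage_cut:
        level = "same lineage"
    else:
        level = "unclassified"
    return name, dist, level

# toy 1000-bp sequences: the query differs from "A1" at a single position
ref_a1 = "ACGT" * 250
ref_c1 = "ACGA" * 250  # heavily diverged stand-in for another lineage
query = ref_a1[:10] + "A" + ref_a1[11:]
print(closest_reference(query, {"A1": ref_a1, "C1": ref_c1}))
```

A single mismatch over 1000 aligned bases gives 0.1% divergence, which falls inside the sublineage band for the nearest reference.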
Furthermore, the European lineage (A lineage and A1–3 sublineages) has been implicated in many cases of cancer worldwide, since the evolutionary genetic variation within HPV16 has already been linked, by studies using partial sequencing, to substantial differences in cervical carcinogenicity . Of the 58 participants analysed in our study, the B and C lineages (African 1 and 2 variants) were the most frequently found (50%), followed by the A lineage (European variant) in 45% of the cases. According to the literature, this is expected, since HPV16 variants show distinct geographical and ethnic distributions . The Asian-American variant is mostly found in Central and South America, the Asian variant is principally detected in Southeast Asia, the African variants predominate in Africa, and the European variant is the most prevalent in all other regions . The sequences of the current study belonged to samples collected from a non-homogeneous group: some samples came from women seeking STI treatment at STI clinics, and others from women seeking cervical cancer screening at community clinics and referral hospitals, without prior knowledge of their cervical cytological condition. However, looking at the distribution of the lineages according to cytology, our results suggest that both the African (C1 sublineage) and European (A1 sublineage) lineages accounted for most of the cervical abnormality cases, mostly HSILs. It was shown previously that the European variants do not follow a uniform pattern with regard to leading to abnormalities, and the A4 sublineage was linked to an increased risk of cancer compared with the A1/A2 clade . The African variants also displayed heterogeneity in disease outcomes, with the B lineage being associated with a statistically significantly reduced risk of inducing abnormalities compared with the A1/A2 sublineages .
In contrast, the C lineage conferred a statistically significant elevated risk, while the D lineages were associated with a substantially higher risk of precancer/cancer compared with the A1/A2 sublineages . These differences from our findings may reflect the fact that our study was restricted to HPV16 sequences. HPV is one of the most common sexually transmitted infections in sub-Saharan Africa . Although these infections are often asymptomatic and clear spontaneously, infections by high-risk genotypes such as HPV16 and HPV18 can progress to anogenital, oropharyngeal, and cervical cancers . Furthermore, individual-level risk factors such as sexual network characteristics, a higher number of sexual partners, and the frequency of sexual intercourse are common in our study areas and can influence the spread of the infection . Looking at the current study's genealogical tree, we can see a closer relationship between some sequences (NM 135 and NM 168; NM 085 and NM 039 in the C1 sublineage cluster; NM 080 and NM 131, NM 133 and NM 191; NM 088, NM 139 and BE 395 in the A1 sublineage cluster), suggesting that they could be related to the same sexual network. The most frequent SNPs in our study samples were found in the LCR, specifically in the European and African lineage isolates. Additionally, those SNPs were found in samples with HSILs as cytological abnormalities ( : HPV16 SNPs according to the cytology). The LCR is the most variable region of the HPV genome, and it plays an important role in viral transcription and replication, as well as in the persistence of the viral infection and the risk of progression to cervical cancer . Furthermore, the LCR contains important viral transcription and replication elements, including the enhancer, E2 binding sites characterised by ACC(N)6GGT fragments, and the origin of replication (ori) . Some nucleotide changes are responsible for increased transcription activity in some HPV16 variants .
In our study, three E2 binding sites (fragments 7452 to 7463, 7843 to 7854, and 7859 to 7871) were identified in the LCR. However, no SNPs were identified in any of these E2 binding sites. Different nucleotide mutations in the LCR have been reported, some of them related to altered pathways involved in viral persistence and cancer development . This can explain the high number of LCR SNPs found in our HSIL study samples. Given the importance of the LCR in the viral life cycle, many genetic modifications at LCR nucleotide positions have been reported that can change the viral oncogenic potency, with a potentially important role in viral pathogenicity . The HPV16 genome contains two important oncogenes, E6 and E7 . The loss of regulation of these two genes drives the development of intraepithelial neoplasia . As described above for the LCR, our European and African lineage samples with HSILs as a cytological abnormality carried most of the E6 gene SNPs. Moreover, as other authors have described, nucleotide modifications in this region may be related to more oncogenic variants . However, our study did not find a significant difference in the frequency of SNPs between the LCR and E6 genes for the African and European lineages. In addition, no significant nucleotide duplication was found in any of the studied sequences that could be linked to alterations in the variants' pathogenesis and probable aggressiveness leading to rapid cancer development. It is important to note that two SNPs in the E6 PDZ domain were identified in our study (G144C and G146T). Numerous studies have indicated that the E6 protein has many other targets in addition to inducing p53 degradation . The C-terminal PDZ-binding motif is specifically conserved among E6 proteins of HrHPVs and is essential to bind and enhance the degradation of several PDZ domain-containing proteins .
Some evidence suggests that the PDZ domain-binding motif is implicated in tumorigenesis through primary human keratinocyte transformation, hyperplasia, and carcinogenesis. Also, some of the PDZ proteins are known to have tumour-suppressor functions . The normal amino acid combination in the E6 PDZ domain contains an isoleucine and three leucine amino acids . Nevertheless, 24 of our study sequences with the G146T SNP were found to have a tyrosine in place of the isoleucine in the E6 PDZ domain, which could suggest an inactivation of the domain without affecting the carcinogenic effect. Several studies have described specific intratypic nucleotide polymorphisms in the E6 and E7 oncogenes among the major HPV16 variants and sublineages, and these polymorphisms were linked to increased viral carcinogenicity . For example, in the A1 and A2 sublineages, the E6 T350G SNP; in the A4 sublineage, E6 T178G and E7 A647G; in the B lineage, E6 G132C, C143G, G145T, T286A, A289G, and C335T, and E7 T789C and T795G; in the C lineage, E6 T109C, G132T, C143G, G145T, T286A, A289G, C335T, and G403G, and E7 A647G, T789C, and T795G; and in the D lineage, E6 G145T, T286A, A289G, C335T, T350G, and A532G, and E7 T732C, T789C, and T795G . Of this group of SNPs, E6 T350G has been found in most of the sublineages and is the most widely studied among Asian and European subjects . None of these previously described E6 SNPs was found in any of the lineages and sublineages in our study. Thus, since our study centred on the E6 and LCR regions, extensive studies of the E1, E2, E3, E4, E5, and E7 genes could provide more details regarding the presence of these SNPs in our study group and/or African subjects. Several limitations should be taken into consideration in the present study.
Firstly, this study used samples collected through national HPV screening and other STD programmes at some sites in both countries, so the diversity of HPV genotypes across all provinces of the two countries was not comprehensively studied. Secondly, this study has a cross-sectional design, with missing demographic information regarding the cancer status of some samples, mainly those collected in Mozambique; therefore, more detailed longitudinal studies at the molecular level are suggested. In addition, the small number of samples analysed is a further limitation. Finally, although the LCR and E6 are important regulatory sequences of the HPV genome, according to the literature, sublineage classification is most accurately conducted from whole genomes or marker regions, such as E2 . Nevertheless, the study findings are interesting and can contribute data on the molecular epidemiology of HPV16 variants in both countries. In conclusion, the present study showed that the African and European lineages were the dominant HPV16 lineages. Given the importance of mutations in the LCR and E6 of the African and European variants in the development of HSILs that can progress to invasive cervical cancer, women infected with these variants should be examined in future longitudinal studies to obtain further information about the oncogenic potential of these dominant variants in the study countries.
Brain tissue iron neurophysiology and its relationship with the cognitive effects of dopaminergic modulation in children with and without ADHD | c04c1e84-838c-47ef-aa0c-f45ac918afc1 | 10372187 | Physiology[mh] | Introduction Attention-deficit/hyperactivity disorder (ADHD) is one of the most common neurodevelopmental disorders of childhood, affecting 5–10% of children worldwide . ADHD is characterized by developmentally inappropriate levels of inattention, hyperactivity, and impulsivity . These core symptoms are often accompanied by deficits in cognitive control, a set of goal-directed processes involved in regulating thoughts and behaviors . One domain of cognitive control that is typically impaired in ADHD is response inhibition ; individuals with ADHD exhibit difficulty suppressing actions that may interfere with goal-directed behaviors . These response inhibition deficits put individuals with ADHD at risk for negative long-term outcomes, including substance use disorders and criminal behavior . Studies of the neural etiology of response inhibition deficits in ADHD have focused on the neurotransmitter dopamine, given its established role in modulating frontostriatal circuits important for response inhibition . It is thought that dopamine’s actions in distinct cortical-basal ganglia loops redirect information from ventromedial frontostriatal networks involved in reward processing to dorsolateral frontostriatal networks involved in cognitive control . Previous neuroimaging research suggests that dopamine neurotransmission is dysfunctional in individuals with ADHD . Positron emission tomography (PET) studies have shown that dopamine metabolism, receptor availability, and transporter function is disrupted in adults and children with ADHD . 
Magnetic resonance imaging (MRI) studies have further shown that children and adults with ADHD exhibit reduced activation and functional connectivity in frontostriatal regions and networks during tasks that probe attention and response inhibition . Previous research has demonstrated that modulation of the dopaminergic system improves the symptoms and cognitive deficits related to ADHD, including response inhibition . In fact, the receipt of rewards during laboratory tasks has been shown to improve response inhibition performance in individuals with ADHD and in age- and sex-matched typically developing (TD) controls . This reward-related reinforcement increases synaptic availability of dopamine in the striatum . Further, the psychostimulant methylphenidate (MPH), the current first-line treatment for ADHD, is an indirect dopamine and norepinephrine agonist. Due to MPH’s dopamine agonism via blockage of dopamine transporters, extracellular levels of striatal dopamine increase following MPH administration . Examining how response inhibition performance changes following the receipt of rewards and MPH administration, and how these performance changes relate to indirect measures of dopamine availability, will therefore shed light on the neurobiological mechanisms through which dopaminergic modulation improves response inhibition in both individuals with ADHD and TD children. Research assessing dopaminergic functioning and dopamine availability in humans in vivo is limited, especially in children, because techniques such as PET involve the use of radiation . One way to circumvent this limitation and indirectly assess dopamine availability in the brain is with magnetic resonance-based measurements of brain tissue iron. Iron is a cofactor of the rate-limiting enzyme tyrosine hydroxylase and of monoamine oxidase, both of which are critical for dopamine synthesis . 
In the human brain, iron is preferentially sequestered in regions that make up the brain’s dopaminergic reward pathway, including the basal ganglia and thalamus . These regions are also critical components of the aforementioned frontostriatal circuitry involved in response inhibition and reward-related reinforcement . Since the presence of iron increases the rate of T2* relaxation, quantifying the T2* relaxation rate (i.e., R2*) of functional MRI (fMRI) data can be used to measure basal ganglia tissue iron levels . Indeed, previous work has employed this approach to investigate basal ganglia iron content by estimating the relative T2* relaxation rate across the brain using existing fMRI data . Recent neuroimaging work using PET has confirmed that midbrain tissue iron measurements derived by quantifying the T2* relaxation rate are correlated with dopamine availability in the striatum, specifically with presynaptic vesicular storage of dopamine . Few studies to date have leveraged brain tissue iron measurements to probe dopaminergic function in individuals with ADHD. These studies have found that individuals with ADHD exhibit reduced brain tissue iron levels in the basal ganglia and thalamus relative to their age- and sex-matched TD peers , which is in line with other neuroimaging work finding reduced midbrain dopamine activity in ADHD . None of these studies have examined the relationship between dopamine-related brain tissue iron neurophysiology and response inhibition performance in individuals with ADHD. Even so, research suggests that greater levels of brain tissue iron are associated with better cognitive ability in TD children , adolescents, and young adults , as well as with responsivity to the receipt of rewards during a response inhibition task in TD adolescents and adults . 
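The inverse relationship between tissue iron and T2*-weighted signal described above can be written as monoexponential decay, S(TE) = S0 * exp(-TE * R2*), where R2* = 1/T2* rises with iron concentration. The relaxation rates and echo time below are illustrative values, not measurements from this study.

```python
import math

def t2star_signal(s0, te_ms, r2star_per_s):
    """Monoexponential T2* decay: S(TE) = S0 * exp(-TE * R2*)."""
    return s0 * math.exp(-(te_ms / 1000.0) * r2star_per_s)

te_ms = 25.0             # a typical echo time for T2*-weighted EPI
low_iron_r2star = 20.0   # 1/s, i.e., T2* = 50 ms (illustrative)
high_iron_r2star = 40.0  # 1/s, i.e., T2* = 25 ms (illustrative)

# More iron -> faster relaxation -> lower signal at the same echo time.
print(round(t2star_signal(100.0, te_ms, low_iron_r2star), 1))   # 60.7
print(round(t2star_signal(100.0, te_ms, high_iron_r2star), 1))  # 36.8
```

Doubling R2* in this toy example roughly halves the signal remaining at the echo time, which is why iron-rich basal ganglia appear dark on T2*-weighted images.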
However, the question of whether these relationships are consistent in individuals with ADHD, and whether dopamine-related physiology modulates the response to dopaminergic modulation in ADHD, remains. The overarching goal of this pre-registered project, therefore, is to investigate brain tissue iron content in the basal ganglia and thalamus using time-averaged normalized T2*-weighted (nT2*w) signal and to assess whether variability in the nT2*w measurement is related to responsivity to dopaminergic modulation in children with ADHD and TD children. Here, ‘dopaminergic modulation’ refers to reward reinforcement or administration of MPH, and responsivity to this modulation will be operationalized as improvement on tasks probing response inhibition. As prior work has found that individuals with ADHD have lower basal ganglia and thalamic tissue iron levels relative to their TD peers , we predict that individuals with ADHD will have higher nT2*w signal, reflecting reduced brain tissue iron levels, in these regions. Based on previous work in TD individuals , we hypothesize that individuals with lower nT2*w signal, reflecting greater tissue iron levels, will exhibit better response inhibition, as well as greater improvements in response inhibition following the administration of rewards and MPH. Materials and methods 2.1 Participants The dataset for the proposed study is a subset of 65 participants with ADHD and TD participants between the ages of 8–12 years who participated in a larger study assessing the effects of MPH administration on functional brain network organization (ADHD: n = 36, 17 F, mean age = 9.70 y; TD: n = 29, 12 F, mean age = 10.23 y). Participants were selected for inclusion in the current study based on fMRI and behavioral data quality. See Motion-related quality assurance and Go/no-go tasks and measures for details about data quality criteria for fMRI and behavioral data, respectively. 
General exclusion criteria for the sample included full scale intelligence quotient (FSIQ) less than 85 as determined using the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V; ), Word Reading subtest score less than 85 from the Wechsler Individual Achievement Test, Third Edition (WIAT-III; ), or if any of the following conditions were met: (a) diagnosis of intellectual disability, developmental speech/language disorder, reading disability, autism spectrum disorder, or a pervasive developmental disorder; (b) visual or hearing impairment; (c) neurologic disorder (e.g., epilepsy, cerebral palsy, traumatic brain injury, Tourette syndrome); (d) medical contraindication to MRI (e.g., implanted electrical devices, dental braces). Diagnostic status was assessed using the Diagnostic Interview Schedule for Children Version IV (DISC-IV; ) and the Conners 3rd Edition Parent and Teacher Rating Scales (Conners-3; ). Participants were included in the ADHD group if they met: (a) full diagnostic criteria for ADHD on the DISC-IV, or (b) intermediate diagnostic criteria (i.e., subthreshold with impairment) on the DISC-IV and full diagnostic criteria for ADHD on the Conners-3 Parent or Teacher Rating Scales. They were additionally required to be psychostimulant medication naïve. Participants were included in the TD group if they met the above criteria and did not meet diagnostic criteria for any psychiatric disorders, including ADHD, on the DISC-IV. Additionally, TD participants were required to have three or fewer symptoms of inattention and of hyperactivity/impulsivity on the DISC-IV. Finally, TD participants were required to have no history or presence of developmental disorders, and no history or presence of ADHD in first-degree relatives. A demographic summary including age, FSIQ as determined using the WISC-V, Word Reading subtest score from the WIAT-III, sex, race, family income, and parental education, is provided in . 
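The inclusion logic above can be summarized as a screening function. All field names are hypothetical (the study's database structure is not described), and only a simplified subset of the DISC-IV/Conners-3 criteria is encoded.

```python
def eligible(p):
    """Sketch of the core screen: FSIQ and WIAT-III Word Reading must both
    be at least 85, with no listed exclusionary condition present."""
    if p["fsiq"] < 85 or p["word_reading"] < 85:
        return False
    exclusions = ("intellectual_disability", "neurologic_disorder",
                  "sensory_impairment", "mri_contraindication")
    return not any(p.get(flag, False) for flag in exclusions)

def group_assignment(p):
    """Sketch of ADHD vs. TD assignment; None means neither group's
    criteria are met. Diagnostic logic is deliberately simplified."""
    if not eligible(p):
        return None
    meets_adhd = p.get("disc_full_adhd") or (
        p.get("disc_intermediate_adhd") and p.get("conners_full_adhd"))
    if meets_adhd:
        # ADHD participants were required to be psychostimulant naive
        return None if p.get("stimulant_exposed", False) else "ADHD"
    few_symptoms = (p.get("inattention_symptoms", 0) <= 3
                    and p.get("hyperactivity_symptoms", 0) <= 3)
    no_diagnoses = not p.get("any_psychiatric_diagnosis", False)
    return "TD" if (few_symptoms and no_diagnoses) else None

print(group_assignment({"fsiq": 102, "word_reading": 95,
                        "disc_full_adhd": True}))  # ADHD
```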
2.2 Procedures

All procedures for the parent study were reviewed and approved by the Institutional Review Board at the University of North Carolina at Chapel Hill. Written parental consent and participant (child) assent were obtained for each participant included in the larger study. Only procedures relevant to the proposed analyses will be described here. Participants underwent two fMRI sessions approximately one week apart (mean = 10.4 days; standard deviation = 7.4 days; range = 3–42 days). Since attention fluctuates throughout the day in school-aged children , most sessions were scheduled at approximately the same time of day (i.e., between 8 am – 12 pm). Seven TD sessions (12.5%) and 16 ADHD sessions (23%) were scheduled in the afternoon. There was not a significant difference across groups in terms of when sessions were scheduled (χ²(1) = 2.4, p = .13). Two participants with ADHD and two TD participants only participated in a single session. Each MRI session included an MPRAGE anatomical T1-weighted scan and the following T2*-weighted echo-planar imaging (EPI) functional scans: two resting-state scans (five minutes each), two standard go/no-go task scans (6.5 min each), and four rewarded go/no-go task scans (six minutes each), administered in that order. During the resting-state scans, participants viewed a white fixation cross on a gray background and were instructed to lie quietly, but awake, in the MRI scanner. For the go/no-go task scans, stimuli were projected onto a screen visible to the participant through a mirror mounted to the head coil and an MRI-safe handheld button box was used to record task responses. Both tasks were presented using PsychoPy v1.85.1 . Both fMRI sessions were identical for participants in the TD group.
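The session-timing comparison can be reproduced approximately from the reported counts, assuming 56 TD sessions (7 in the afternoon) and 70 ADHD sessions (16 in the afternoon), which match the quoted 12.5% and 23%. A Pearson chi-square without continuity correction gives a value close to the reported statistic; the small discrepancy from 2.4 likely reflects rounding or correction choices.

```python
def pearson_chi2(table):
    """Pearson chi-square statistic (no continuity correction) for an
    r x c contingency table given as a list of rows."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: TD, ADHD; columns: afternoon, morning (counts inferred from the text)
sessions = [[7, 49], [16, 54]]
print(round(pearson_chi2(sessions), 2))  # 2.24
```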
In the case that a TD child had usable fMRI and behavioral data from both sessions, data from the session with the highest percentage of fMRI volumes retained after motion scrubbing were used, based on fMRI data exclusion criteria (See Motion-related quality assurance ). Participants in the ADHD group participated in a double-blind, randomized, placebo-controlled crossover MPH challenge. On each day of testing, participants in the ADHD group received 0.30 mg/kg MPH or placebo, rounded up to the nearest 5 mg, orally approximately one hour before scanning. Aside from the MPH challenge, fMRI sessions for participants in the ADHD group were identical. Since we were interested in how intrinsic, baseline brain tissue iron levels are related to improvements in response inhibition that follow dopaminergic modulation, such as the receipt of rewards during cognitive tasks or the administration of MPH, fMRI data collected following placebo administration were used to assess brain tissue iron in all analyses for children with ADHD. Furthermore, it has been proposed that brain tissue iron estimates derived from fMRI are reflective of stable properties of brain tissue , and we confirmed that this was the case in our data, both across sessions in TD children (i.e., across a weeks-long period) and between placebo and MPH sessions in children with ADHD (i.e., after a single dose of MPH; see , Supplementary analyses and Supplementary results, , ). In the analyses that examined responsivity to rewards only, behavioral data collected following placebo administration were used. In the analyses that examined the effects of MPH administration, behavioral data collected following both placebo and MPH administration were used. All analyses here were pre-registered and the protocol was submitted to Open Science Framework prior to data analysis. 2.3 Go/no-go tasks and measures Two versions of a go/no-go task, a standard and a rewarded version, were administered. 
The versions of the go/no-go task we used were adapted from one that was initially designed to have a high proportion of errors . In both versions, eight sports balls were used as the stimuli. Two of the eight sports balls were randomly selected for each participant as ‘no-go’ stimuli. The other six sports balls were ‘go’ stimuli. Participants were instructed to respond as quickly as possible with a button press using their right index finger following the presentation of ‘go’ sports balls (73.4% of trials) and to withhold responding when ‘no-go’ sports balls were presented (26.6% of trials). The standard go/no-go task ( a ) consisted of two runs of 128 trials each, for a total of 256 trials (188 go trials and 68 no-go trials). Stimulus order was pseudorandom such that between two and four go trials preceded each no-go trial. There were 16 instances of two consecutive go trials, 10 instances of three consecutive go trials, and eight instances of four consecutive go trials, randomized for each run. Each stimulus was presented for 600 ms, with a jittered interstimulus interval (ISI) of 1250–3250 ms selected from a uniform distribution. In the rewarded go/no-go task ( b ), stimuli and timing were identical to the standard go/no-go task, with the addition of feedback after each response. Feedback (coins for correct trials and empty circles for incorrect trials) was presented for 600 ms after a brief delay that was jittered identically to the ISI between trials. Due to the longer trial length, the rewarded go/no-go task consisted of four runs of 64 trials each, again for a total of 256 trials. The instructions for the rewarded go/no-go task were identical to the standard version, but participants were also told that they would be rewarded for correct, fast responses on go trials (≤650 ms) and for correct non-responses on no-go trials. Participants received one penny per correct/fast go trial and five pennies per correct no-go trial. 
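The pseudorandom ordering constraint fully determines the trial counts: 16 two-go, 10 three-go, and 8 four-go stretches give 16*2 + 10*3 + 8*4 = 94 go trials plus 34 no-go trials per 128-trial run, matching the reported 73.4%/26.6% split and the 188/68 totals over two runs. The sketch below shows how such an order could be generated; it is not the study's actual PsychoPy code.

```python
import random

def build_run(seed=None):
    """One 128-trial go/no-go run: shuffled stretches of 2, 3, or 4 go
    trials (16, 10, and 8 of each), each followed by a no-go trial."""
    rng = random.Random(seed)
    stretches = [2] * 16 + [3] * 10 + [4] * 8
    rng.shuffle(stretches)
    trials = []
    for n_go in stretches:
        trials.extend(["go"] * n_go)
        trials.append("nogo")
    return trials

run = build_run(seed=1)
print(len(run), run.count("go"), run.count("nogo"))  # 128 94 34
```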
Participants received the money they accumulated over the four runs of the rewarded go/no-go task at the end of each visit. Individual runs of the standard and rewarded go/no-go tasks were excluded for an omission rate on go trials that was greater than three standard deviations from the mean omission error rate, separately for each task, as determined using standard and rewarded go/no-go task data collected from participants with ADHD following placebo administration and TD participants. Specifically, individual runs of the standard go/no-go task were excluded if the proportion of omission errors exceeded 0.44, and individual runs of the rewarded go/no-go task were excluded if the proportion of omission errors exceeded 0.39. This was to ensure that participants were awake and actively engaging with the task. Additionally, individual go trials with response times faster than 200 ms were excluded from analyses, as exceptionally fast response times are indicative of anticipatory responses . Behavioral performance was indexed using the proportion of commission errors and response time variability. The proportion of commission errors was calculated as the proportion of no-go trials on which a response was made. Response time variability was quantified using tau, which is derived from the exponential-Gaussian distributional model of response times and assesses infrequent, extremely slow response times that are indicative of attention lapses . Tau quantifies the mean and standard deviation of the exponential component of the response time distribution. To calculate tau, the timefit function from the 'retimes' package in R was used to bootstrap the response times associated with correct go trials 5000 times, and the mean and standard deviation of the exponential distribution of response times were calculated. For summary statistics of behavioral performance, see .
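For intuition about what tau captures, a method-of-moments version of the ex-Gaussian decomposition is sketched below. The study itself used bootstrapped fits via retimes::timefit in R; the moment estimator here is a simpler, rougher alternative that relies on the ex-Gaussian identities mean = mu + tau, variance = sigma^2 + tau^2, and third central moment = 2*tau^3.

```python
import random

def exgaussian_moments(rts):
    """Method-of-moments ex-Gaussian estimates (mu, sigma, tau) from a
    list of response times; tau indexes the slow exponential tail."""
    n = len(rts)
    mean = sum(rts) / n
    m2 = sum((x - mean) ** 2 for x in rts) / n
    m3 = sum((x - mean) ** 3 for x in rts) / n
    tau = max(m3 / 2.0, 0.0) ** (1.0 / 3.0)
    mu = mean - tau
    sigma = max(m2 - tau ** 2, 0.0) ** 0.5
    return mu, sigma, tau

# simulate RTs with known parameters: mu = 400 ms, sigma = 50 ms, tau = 100 ms
rng = random.Random(0)
rts = [rng.gauss(400, 50) + rng.expovariate(1 / 100) for _ in range(20000)]
mu, sigma, tau = exgaussian_moments(rts)
print(round(mu), round(sigma), round(tau))
```

With a large simulated sample, the recovered parameters land close to the generating values, illustrating how tau isolates the occasional very slow responses from the Gaussian bulk of the distribution.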
The distributions of the proportion of commission errors and tau were assessed for normality using the Shapiro-Wilk test of normality ( α = 0.05) . Tau was not normally distributed, and the proportion of commission errors on the standard go/no-go task in all participants and following MPH administration in participants with ADHD was not normally distributed (all p-values < 0.04; see , ). Therefore, log-transformed values of both the proportion of commission errors and tau were used in all analyses. 2.4 MRI data acquisition All neuroimaging data were collected at the University of North Carolina at Chapel Hill Biomedical Research Imaging Center. Data were acquired with a 32-channel head coil on a 3-Tesla Siemens MAGNETOM Prisma-fit whole-body MRI machine. High resolution T1-weighted anatomical scans were acquired using a magnetization prepared rapid acquisition gradient echo (MPRAGE) sequence (TR = 2400 ms, TE = 2.22 ms, FA = 8°, field of view 256 × 256 mm, 208 slices, resolution = 0.8 mm × 0.8 mm × 0.8 mm). Whole-brain T2*-weighted fMRI data were acquired using an echo-planar imaging (EPI) sequence (39 axial slices parallel to the AC–PC line, slice thickness 3 mm, interslice distance = 3.3 mm, TR = 2000 ms, TE = 25 ms, FA = 77°, echo spacing = 0.54 ms, field of view 230 mm × 230 mm, voxel dimensions: 2.9 mm × 2.9 mm × 3.0 mm). For the resting-state scan, 300 timepoints were collected (150 timepoints and five minutes per each of two runs). A total of 390 timepoints were collected during the standard go/no-go task (195 timepoints and 6.5 min per each of two runs), and 740 timepoints were collected during the rewarded go/no-go task (185 timepoints and 6.17 min per each of four runs). 2.5 fMRIPrep anatomical and functional data preprocessing The following text has been adapted from the fMRIPrep boilerplate text that is automatically generated with the express intention that it is used in manuscripts. It is released under the CC0 license. 
All T1w images were corrected for intensity non-uniformity (INU) with N4BiasFieldCorrection, distributed with ANTs 2.2.0 ( , RRID:SCR_004757). The T1w reference was then skull-stripped with a Nipype implementation of the antsBrainExtraction.sh workflow (from ANTs), using OASIS30ANTs as the target template. Brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) was performed on the brain-extracted T1w using FAST (FSL 5.0.9, RRID:SCR_002823, ). A T1w-reference map was computed after registration and INU-correction of the T1w images using mri_robust_template (FreeSurfer 6.0.1, ). Brain surfaces were reconstructed using recon-all (FreeSurfer 6.0.1, RRID:SCR_001847, ), and the brain mask estimated previously was refined with a custom variation of the method to reconcile ANTs- and FreeSurfer-derived segmentations of the cortical gray matter of Mindboggle (RRID:SCR_002438, ). Volume-based spatial normalization to standard MNI152NLin2009cAsym space was performed via nonlinear registration with antsRegistration (ANTs 2.2.0), using brain-extracted versions of both the T1w reference and the T1w template. The ICBM 152 Nonlinear Asymmetrical template version 2009c was selected for spatial normalization ( , RRID:SCR_008796; TemplateFlow ID: MNI152NLin2009cAsym). For each subject's BOLD runs (across all tasks and sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. A deformation field to correct for susceptibility distortions was estimated based on fMRIPrep's fieldmap-less approach. The deformation field resulted from coregistering the BOLD reference to the same-subject T1w reference with its intensity inverted .
Registration was performed with antsRegistration (ANTs 2.2.0), and the process was regularized by constraining deformation to be nonzero only along the phase-encoding direction and modulated with an average fieldmap template . Based on the estimated susceptibility distortion, an unwarped BOLD reference was calculated for a more accurate coregistration with the anatomical reference. The BOLD reference was then coregistered to the T1w reference using bbregister (FreeSurfer), which implements boundary-based registration . Coregistration was configured with six degrees of freedom. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) were estimated using MCFLIRT (FSL 5.0.9, ). BOLD runs were slice-time corrected using 3dTshift from AFNI 20160207 ( , RRID:SCR_005927). The BOLD timeseries (including slice-timing correction) were resampled onto their original, native space by applying a single, composite transform to correct for head-motion and susceptibility distortions. These resampled BOLD timeseries will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The preprocessed BOLD timeseries were then resampled into standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space. Following the processing and resampling steps, confounding framewise displacement (FD) timeseries were calculated based on the preprocessed BOLD for each functional run, using implementations in Nipype (following the definition by ). Many internal operations of fMRIPrep use Nilearn 0.5.2 ( , RRID:SCR_001362), mostly within the functional processing workflow. For more details of the pipeline, see the section corresponding to workflows in fMRIPrep’s documentation.
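The FD definition referenced above (the sum of absolute volume-to-volume changes in the six realignment parameters, with rotations converted to millimeters of arc on a 50 mm sphere, following Power et al.) can be sketched as below. The function names and array layout are illustrative, not fMRIPrep's actual API:

```python
import numpy as np

def framewise_displacement(motion_params, radius_mm=50.0):
    """Framewise displacement: sum of absolute frame-to-frame changes
    in the six realignment parameters, with the three rotations
    (radians) converted to arc displacement on a 50 mm sphere.

    motion_params : (T, 6) array of [trans_x, trans_y, trans_z,
                    rot_x, rot_y, rot_z] per volume; translations in
                    mm, rotations in radians.
    Returns a length-T array; FD of the first volume is set to 0.
    """
    params = np.asarray(motion_params, dtype=float).copy()
    params[:, 3:] *= radius_mm               # radians -> mm of arc length
    diffs = np.abs(np.diff(params, axis=0))  # frame-to-frame change
    return np.concatenate([[0.0], diffs.sum(axis=1)])

def scrub_mask(fd, threshold=0.3):
    """Boolean mask of volumes to KEEP (FD at or below the 0.3 mm
    threshold used in the text)."""
    return np.asarray(fd) <= threshold
```

A usage note: the scrubbing threshold is applied later, in the nT2*w quantification, where volumes with FD above 0.3 mm are dropped before normalization.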
2.6 Normalization and time-averaging of T2*‐weighted data

In this set of analyses, BOLD signal was not used since we did not perform timeseries analyses. Instead, we were interested in iron levels, which are a time-invariant property of brain tissue quantified from T2*‐weighted data . To quantify normalized T2*‐weighted (nT2*w) signal, high-motion timepoints were first identified and excluded from analyses. High-motion timepoints were defined as those that exceeded 0.3 mm FD . Then, to correct for scanner drift and potential differences between participants and MRI runs, each volume was normalized to the whole-brain mean. The nT2*w signal from each voxel was then aggregated across all remaining volumes of the resting-state and task runs using the median, separately for each participant. The median was used to reduce the impact of outlier volumes . This resulted in a voxel-wise map of median nT2*w signal for each participant. As the presence of iron is inversely related to nT2*w signal, reduced nT2*w signal indicates increased brain tissue iron and therefore increased intrinsic DA availability.

2.7 ROI selection

Bilateral caudate, putamen, globus pallidus, accumbens, and thalamus were selected as regions of interest. These regions were selected for two reasons. First, they are dopamine rich, and iron is colocalized with dopamine in the brain . Second, reduced brain tissue iron has been observed in each of these regions in children with ADHD relative to age- and sex-matched TD peers . ROIs were defined using the Harvard-Oxford subcortical atlas . ROIs only included voxels with at least 50% probability of belonging to each specific brain region. nT2*w signal was averaged across all voxels in each ROI, resulting in a single value per ROI. We first analyzed nT2*w signal across a whole basal ganglia ROI and a thalamus ROI.
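A minimal sketch of the nT2*w quantification and ROI averaging described above, assuming the BOLD data have been flattened to a timepoints × voxels array (function names are mine, not from the paper):

```python
import numpy as np

def median_nt2w_map(bold, fd, fd_thresh=0.3):
    """Voxelwise median nT2*w signal: drop high-motion volumes
    (FD > 0.3 mm), normalize each remaining volume to its whole-brain
    mean (correcting for scanner drift and run differences), then take
    the voxelwise median, which is robust to outlier volumes.

    bold : (T, V) array of T2*w volumes, one row per timepoint
    fd   : (T,) framewise displacement per volume, in mm
    """
    keep = np.asarray(fd, float) <= fd_thresh
    vols = np.asarray(bold, float)[keep]
    vols = vols / vols.mean(axis=1, keepdims=True)  # whole-brain-mean normalization
    return np.median(vols, axis=0)

def roi_mean_nt2w(nt2w_map, roi_mask):
    """Mean nT2*w signal across the voxels of one ROI mask (e.g.,
    Harvard-Oxford voxels with at least 50% regional probability)."""
    return float(nt2w_map[np.asarray(roi_mask, bool)].mean())
```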
The basal ganglia ROI combined Harvard-Oxford atlas masks of bilateral caudate, putamen, globus pallidus, and accumbens into a single basal ganglia ROI. Next, to assess the regional specificity of basal ganglia nT2*w signal and its relationships with performance, we extracted nT2*w signal from bilateral caudate, putamen, globus pallidus, and accumbens separately.

2.8 Motion-related quality assurance

Any participant with at least 170 volumes remaining across all fMRI runs after excluding high-motion timepoints was included in the current study . One participant with ADHD was excluded as their fMRI data following placebo administration did not meet this criterion. To ensure that nT2*w signal was not significantly impacted by motion, we correlated mean FD across all runs with the nT2*w signal for each region of interest (ROI) using Pearson correlations (α = 0.05). There were no significant relationships between mean FD and nT2*w signal in any of the ROIs we examined (all corrected p-values > 0.48; see , ).

2.9 Analyses

Before conducting each of the below analyses, power analyses were performed to determine statistical power to detect the expected effects. See , Expected Power for details. Group comparisons of demographic variables: To ensure that the ADHD and TD groups did not differ on demographic variables, including sex, race, family income, and parental education, we used two-sample chi-squared tests to compare groups on each of these variables. We additionally conducted Welch’s t-tests for unequal variance to compare groups on age, FSIQ, and Word Reading scores. Tests were FDR-corrected for seven comparisons at p < .05. Replication analysis – comparing nT2*w signal in children with ADHD and TD children: We first replicated previous work and examined whether there were group differences between children with ADHD and TD children in nT2*w signal in the whole basal ganglia and thalamus .
We used separate linear regression models covarying for age and sex for each of the two ROIs, as follows:

nT2*w signal ∼ group + age + sex

Models were FDR-corrected for two comparisons at p < .05 . We also determined whether there were group differences in nT2*w signal in specific basal ganglia subregions (i.e., caudate, putamen, globus pallidus, and accumbens). In this secondary analysis, we again used separate linear regression models covarying for age and sex for each of the four ROIs. Here, results were FDR-corrected for four comparisons at p < .05 . One participant with ADHD was not included in this analysis as their fMRI data following placebo administration had fewer than 170 volumes included after excluding high-motion timepoints (see Motion-related quality assurance ). This left 64 participants (n ADHD = 35, n TD = 29) in this analysis. Relationship between nT2*w signal and response inhibition: To determine the relationship between nT2*w signal and response inhibition in children with ADHD and TD children, we used linear regression models covarying for age and sex using the following equation:

response inhibition performance ∼ nT2*w signal + age + sex

For the primary analysis, we used separate linear regression models for each response inhibition performance measure (commission errors, tau) and ROI (whole basal ganglia and thalamus) for a total of four models (two response inhibition measures x two ROIs). Statistical tests were FDR-corrected for two comparisons separately for each response inhibition measure at p < .05, as we included two ROIs in the primary analysis . In a secondary analysis that assessed regional specificity of the relationship between nT2*w signal and response inhibition performance, we extracted nT2*w signal from each basal ganglia ROI separately.
We used separate linear regression models for each response inhibition performance measure (commission errors, tau) and basal ganglia ROI (caudate, putamen, globus pallidus, and accumbens) for a total of eight models (two response inhibition measures x four ROIs). Here, statistical tests were FDR-corrected for four comparisons separately for each response inhibition measure at p < .05, as we included four basal ganglia ROIs . Given literature indicating that the relationship between brain tissue iron and response inhibition performance is strongest in individuals with high levels of brain tissue iron , and that individuals with ADHD have reduced brain tissue iron relative to TD individuals , we performed an additional analysis in which we examined whether the relationship between nT2*w signal and response inhibition performance differed as a function of diagnostic group. As such, we implemented linear regression models wherein response inhibition performance was predicted by nT2*w signal, diagnostic group, and the interaction between nT2*w signal and diagnostic group, covarying for age and sex as follows: response inhibition performance ∼ nT2*w signal + group + nT2*w signal*group + age + sex Separate linear regression models for each response inhibition measure and ROI were used, and groups of models were FDR-corrected separately at p < .05 as above (i.e., two corrections per response inhibition measure for whole basal ganglia and thalamus in the primary analysis; four corrections per response inhibition measure for caudate, putamen, globus pallidus, and accumbens in the secondary analysis). One additional participant was not considered for inclusion in this analysis due to inconsistent presentation of no-go stimuli (n = 1, TD). This left 63 participants (n ADHD = 35, n TD = 28) in this analysis. 
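A sketch of the modeling approach above: covariate-adjusted OLS on z-scored variables, which yields the standardized betas reported later, plus Benjamini-Hochberg FDR correction across a family of models. The paper does not state its statistical software, and these function names are my own:

```python
import numpy as np

def standardized_beta(y, predictor, covariates):
    """OLS of y on a predictor of interest plus covariates (e.g., age
    and sex), with every variable z-scored so the fitted coefficient
    of the predictor is a standardized beta."""
    zscore = lambda a: (a - a.mean(axis=0)) / a.std(axis=0)
    design = zscore(np.column_stack([predictor, covariates]))
    X = np.column_stack([np.ones(len(design)), design])
    coefs, *_ = np.linalg.lstsq(X, zscore(np.asarray(y, float)), rcond=None)
    return coefs[1]  # coefficient of the predictor of interest

def fdr_bh(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR: boolean array marking which p-values in
    a family of comparisons survive correction at the given alpha."""
    p = np.asarray(pvals, float)
    order = np.argsort(p)
    m = len(p)
    below = p[order] <= alpha * np.arange(1, m + 1) / m
    passed = np.zeros(m, bool)
    if below.any():
        passed[order[: np.nonzero(below)[0].max() + 1]] = True
    return passed
```

Each family of models (e.g., the two primary ROIs per performance measure) would be corrected together by passing its p-values to `fdr_bh` in one call.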
Relationship between nT2*w signal and responsivity to reward : To examine whether variability in nT2*w signal predicts responsivity to reward in children with ADHD and TD children, change in performance from the standard go/no-go task to the rewarded go/no-go task was calculated for all participants by subtracting performance measures on the rewarded go/no-go task from those on the standard go/no-go task, such that higher values (i.e., more positive) reflected greater improvements in task performance. Linear regression models covarying for age and sex were used to relate nT2*w signal to change in response inhibition performance separately for each response inhibition measure and ROI using the following equation: ∆ response inhibition performance ∼ nT2*w signal + age + sex As in the analyses above, separate linear regression models for each response inhibition measure and ROI were used. Again, groups of models were FDR-corrected separately at p < .05 . Specifically, for the primary analysis, two corrections per response inhibition measure (commission errors, tau) were made for the whole basal ganglia and thalamus. In the secondary analysis, four corrections per response inhibition measure were made for the basal ganglia subregions (i.e., caudate, putamen, globus pallidus, and accumbens). We also performed an additional analysis that examined whether the relationship between nT2*w signal and responsivity to reward differed as a function of diagnostic group. As such, we implemented linear regression models wherein change in response inhibition performance was predicted by nT2*w signal, diagnostic group, and the interaction between nT2*w signal and diagnostic group, covarying for age and sex as follows: ∆response inhibition performance ∼ nT2*w signal + group + nT2*w signal*group + age + sex Separate linear regression models covarying for age and sex for each response inhibition measure and ROI were used. 
Models were FDR-corrected in the same way as described in the previous analyses at p < .05 . That is, for each response inhibition measure (commission errors, tau) two corrections were made in the primary analysis that examined nT2*w signal in the basal ganglia and thalamus, and four corrections were made in the secondary analysis that examined nT2*w signal in the four basal ganglia subregions (i.e., caudate, putamen, globus pallidus, and accumbens). Two additional participants were not considered for inclusion in this analysis for missing rewarded go/no-go data (n = 1, TD) and incorrect button presses during the rewarded go/no-go task (n = 1, ADHD), leaving 61 participants (n ADHD = 34, n TD = 27) in this analysis. Relationship between nT2*w signal and responsivity to MPH : To investigate whether variability in nT2*w signal predicts responsivity to MPH in children with ADHD, change in standard go/no-go performance from placebo to MPH was calculated by subtracting performance measures on MPH from those on placebo. Here, higher values (i.e., more positive) indicate greater improvement of performance following MPH. Linear regression models covarying for age and sex were used to relate nT2*w signal to change in response inhibition performance separately for each response inhibition measure and ROI using the following equation: ∆ response inhibition performance ∼ nT2*w signal + age + sex As in the previous analyses, statistical tests were FDR-corrected separately for each response inhibition measure at p < .05 . In the primary analysis, statistical tests were corrected for two comparisons per response inhibition measure (commission errors, tau), as there were two ROIs (whole basal ganglia and thalamus). In the secondary analysis, statistical tests were corrected for four comparisons per response inhibition measure, as there were four basal ganglia subregion ROIs (caudate, putamen, globus pallidus, and accumbens). 
Three participants with ADHD were not considered for inclusion in this analysis due to missing standard go/no-go data on placebo (n = 1) and on MPH (n = 2), leaving 33 participants in this analysis. For all linear regression models, standardized betas are reported in the Results section.

Participants

The dataset for the current study is a subset of 65 participants with ADHD and TD participants between the ages of 8–12 years who participated in a larger study assessing the effects of MPH administration on functional brain network organization (ADHD: n = 36, 17 F, mean age = 9.70 y; TD: n = 29, 12 F, mean age = 10.23 y). Participants were selected for inclusion in the current study based on fMRI and behavioral data quality. See Motion-related quality assurance and Go/no-go tasks and measures for details about data quality criteria for fMRI and behavioral data, respectively. General exclusion criteria for the sample included a full scale intelligence quotient (FSIQ) less than 85 as determined using the Wechsler Intelligence Scale for Children, Fifth Edition (WISC-V; ), a Word Reading subtest score less than 85 from the Wechsler Individual Achievement Test, Third Edition (WIAT-III; ), or any of the following conditions: (a) diagnosis of intellectual disability, developmental speech/language disorder, reading disability, autism spectrum disorder, or a pervasive developmental disorder; (b) visual or hearing impairment; (c) neurologic disorder (e.g., epilepsy, cerebral palsy, traumatic brain injury, Tourette syndrome); (d) medical contraindication to MRI (e.g., implanted electrical devices, dental braces). Diagnostic status was assessed using the Diagnostic Interview Schedule for Children Version IV (DISC-IV; ) and the Conners 3rd Edition Parent and Teacher Rating Scales (Conners-3; ).
Participants were included in the ADHD group if they met: (a) full diagnostic criteria for ADHD on the DISC-IV, or (b) intermediate diagnostic criteria (i.e., subthreshold with impairment) on the DISC-IV and full diagnostic criteria for ADHD on the Conners-3 Parent or Teacher Rating Scales. They were additionally required to be psychostimulant medication naïve. Participants were included in the TD group if they met the above criteria and did not meet diagnostic criteria for any psychiatric disorders, including ADHD, on the DISC-IV. Additionally, TD participants were required to have three or fewer symptoms of inattention and of hyperactivity/impulsivity on the DISC-IV. Finally, TD participants were required to have no history or presence of developmental disorders, and no history or presence of ADHD in first-degree relatives. A demographic summary including age, FSIQ as determined using the WISC-V, Word Reading subtest score from the WIAT-III, sex, race, family income, and parental education, is provided in .

Procedures

All procedures for the parent study were reviewed and approved by the Institutional Review Board at the University of North Carolina at Chapel Hill. Written parental consent and participant (child) assent were obtained for each participant included in the larger study. Only procedures relevant to the current analyses will be described here. Participants underwent two fMRI sessions approximately one week apart (mean = 10.4 days; standard deviation = 7.4 days; range = 3–42 days). Since attention fluctuates throughout the day in school-aged children , most sessions were scheduled at approximately the same time of day (i.e., between 8 am – 12 pm). Seven TD sessions (12.5%) and 16 ADHD sessions (23%) were scheduled in the afternoon. There was not a significant difference across groups in terms of when sessions were scheduled (χ²(1) = 2.4, p = .13). Two participants with ADHD and two TD participants only participated in a single session.
Each MRI session included an MPRAGE anatomical T1-weighted scan and the following T2*-weighted echo-planar imaging (EPI) functional scans: two resting-state scans (five minutes each), two standard go/no-go task scans (6.5 min each), and four rewarded go/no-go task scans (six minutes each), administered in that order. During the resting-state scans, participants viewed a white fixation cross on a gray background and were instructed to lie quietly, but awake, in the MRI scanner. For the go/no-go task scans, stimuli were projected onto a screen visible to the participant through a mirror mounted to the head coil and an MRI-safe handheld button box was used to record task responses. Both tasks were presented using PsychoPy v1.85.1 . Both fMRI sessions were identical for participants in the TD group. In the case that a TD child had usable fMRI and behavioral data from both sessions, data from the session with the highest percentage of fMRI volumes retained after motion scrubbing were used, based on fMRI data exclusion criteria (See Motion-related quality assurance ). Participants in the ADHD group participated in a double-blind, randomized, placebo-controlled crossover MPH challenge. On each day of testing, participants in the ADHD group received 0.30 mg/kg MPH or placebo, rounded up to the nearest 5 mg, orally approximately one hour before scanning. Aside from the MPH challenge, fMRI sessions for participants in the ADHD group were identical. Since we were interested in how intrinsic, baseline brain tissue iron levels are related to improvements in response inhibition that follow dopaminergic modulation, such as the receipt of rewards during cognitive tasks or the administration of MPH, fMRI data collected following placebo administration were used to assess brain tissue iron in all analyses for children with ADHD. 
Furthermore, it has been proposed that brain tissue iron estimates derived from fMRI are reflective of stable properties of brain tissue , and we confirmed that this was the case in our data, both across sessions in TD children (i.e., across a weeks-long period) and between placebo and MPH sessions in children with ADHD (i.e., after a single dose of MPH; see , Supplementary analyses and Supplementary results, , ). In the analyses that examined responsivity to rewards only, behavioral data collected following placebo administration were used. In the analyses that examined the effects of MPH administration, behavioral data collected following both placebo and MPH administration were used. All analyses here were pre-registered and the protocol was submitted to Open Science Framework prior to data analysis.

Go/no-go tasks and measures

Two versions of a go/no-go task, a standard and a rewarded version, were administered. The versions of the go/no-go task we used were adapted from one that was initially designed to have a high proportion of errors . In both versions, eight sports balls were used as the stimuli. Two of the eight sports balls were randomly selected for each participant as ‘no-go’ stimuli. The other six sports balls were ‘go’ stimuli. Participants were instructed to respond as quickly as possible with a button press using their right index finger following the presentation of ‘go’ sports balls (73.4% of trials) and to withhold responding when ‘no-go’ sports balls were presented (26.6% of trials). The standard go/no-go task ( a ) consisted of two runs of 128 trials each, for a total of 256 trials (188 go trials and 68 no-go trials). Stimulus order was pseudorandom such that between two and four go trials preceded each no-go trial. There were 16 instances of two consecutive go trials, 10 instances of three consecutive go trials, and eight instances of four consecutive go trials, randomized for each run.
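The pseudorandom trial structure just described (each no-go preceded by two to four go trials, with fixed counts of each streak length per run) can be generated with a short sketch; `gonogo_run` is a name of my own, not from the paper's PsychoPy code:

```python
import random

def gonogo_run(seed=None):
    """One 128-trial run of the standard go/no-go task: 16 streaks of
    two go trials, 10 streaks of three, and 8 streaks of four, each
    streak followed by one no-go trial (94 go + 34 no-go per run; two
    runs give the 188 go / 68 no-go totals)."""
    rng = random.Random(seed)
    streaks = [2] * 16 + [3] * 10 + [4] * 8
    rng.shuffle(streaks)                      # randomized for each run
    trials = []
    for n_go in streaks:
        trials += ["go"] * n_go + ["no-go"]
    return trials
```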
Each stimulus was presented for 600 ms, with a jittered interstimulus interval (ISI) of 1250–3250 ms selected from a uniform distribution. In the rewarded go/no-go task ( b ), stimuli and timing were identical to the standard go/no-go task, with the addition of feedback after each response. Feedback (coins for correct trials and empty circles for incorrect trials) was presented for 600 ms after a brief delay that was jittered identically to the ISI between trials. Due to the longer trial length, the rewarded go/no-go task consisted of four runs of 64 trials each, again for a total of 256 trials. The instructions for the rewarded go/no-go task were identical to the standard version, but participants were also told that they would be rewarded for correct, fast responses on go trials (≤650 ms) and for correct non-responses on no-go trials. Participants received one penny per correct/fast go trial and five pennies per correct no-go trial. Participants received the money they accumulated over the four runs of the rewarded go/no-go task at the end of each visit. Individual runs of the standard and rewarded go/no-go tasks were excluded for an omission rate on go trials that was greater than three standard deviations from the mean omission error rate, separately for each task, as determined using standard and rewarded go/no-go task data collected from participants with ADHD following placebo administration and TD participants. Specifically, individual runs of the standard go/no-go task were excluded if the proportion of omission errors exceeded 0.44 and individual runs of the rewarded go/no-go task were excluded if the proportion of omission errors exceeded 0.39. This was to ensure that participants were awake and actively engaging with the task. Additionally, individual go trials with response times faster than 200 ms were excluded from analyses, as exceptionally fast response times are indicative of anticipatory responses . 
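These run- and trial-level exclusions can be sketched as below, with omissions encoded as NaN response times. The cutoffs (0.44 standard, 0.39 rewarded) come from the text, while the function names are mine:

```python
import numpy as np

def run_is_usable(rts, is_go, omission_cutoff):
    """Run-level exclusion: keep a run only if its go-trial omission
    rate is at or below the task-specific cutoff (0.44 for the
    standard task, 0.39 for the rewarded task; both are 3 SD above
    the group mean omission rate).

    rts   : response time in ms per trial, np.nan for no response
    is_go : boolean array marking go trials
    """
    go_rts = np.asarray(rts, float)[np.asarray(is_go, bool)]
    return float(np.mean(np.isnan(go_rts))) <= omission_cutoff

def valid_go_rts(rts, is_go, min_rt=200.0):
    """Go-trial RTs with omissions dropped and anticipatory responses
    (< 200 ms) excluded before computing RT-based measures."""
    go_rts = np.asarray(rts, float)[np.asarray(is_go, bool)]
    go_rts = go_rts[~np.isnan(go_rts)]   # drop omissions
    return go_rts[go_rts >= min_rt]      # drop anticipatory responses
```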
Behavioral performance was indexed using the proportion of commission errors and response time variability. The proportion of commission errors was calculated as the proportion of no-go trials on which a response was made. Response time variability was quantified using tau, which is derived from the exponential-Gaussian distributional model of response times and assesses infrequent, extremely slow response times that are indicative of attention lapses . Tau quantifies the mean and standard deviation of the exponential component of the response time distribution. To calculate tau, the timefit function from the ‘retimes’ package in R was used to bootstrap the response times associated with correct go trials 5000 times, and the mean and standard deviation of the exponential distribution of response times was calculated. For summary statistics of behavioral performance, see in .
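As a rough stand-in for the bootstrapped `timefit` fit from the R 'retimes' package, tau can be approximated by the method of moments: for an ex-Gaussian distribution the third central moment equals 2τ³, so τ = (m₃/2)^(1/3). This is a simplified sketch, not the estimator the authors used:

```python
import numpy as np

def exgauss_tau_moments(rts):
    """Method-of-moments estimate of the ex-Gaussian tau parameter
    (the mean and SD of the exponential component of the RT
    distribution). The Gaussian component contributes no skew, so the
    sample third central moment m3 is approximately 2 * tau**3."""
    rts = np.asarray(rts, float)
    m3 = np.mean((rts - rts.mean()) ** 3)
    return float(np.cbrt(m3 / 2.0)) if m3 > 0 else 0.0
```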
Whole-brain T2*-weighted fMRI data were acquired using an echo-planar imaging (EPI) sequence (39 axial slices parallel to the AC–PC line, slice thickness 3 mm, interslice distance = 3.3 mm, TR = 2000 ms, TE = 25 ms, FA = 77°, echo spacing = 0.54 ms, field of view 230 mm × 230 mm, voxel dimensions: 2.9 mm × 2.9 mm × 3.0 mm). For the resting-state scan, 300 timepoints were collected (150 timepoints and five minutes per each of two runs). A total of 390 timepoints were collected during the standard go/no-go task (195 timepoints and 6.5 min per each of two runs), and 740 timepoints were collected during the rewarded go/no-go task (185 timepoints and 6.17 min per each of four runs). fMRIPrep anatomical and functional data preprocessing The following text has been adapted from the fMRIPrep boilerplate text that is automatically generated with the express intention that it is used in manuscripts. It is released under the CC0 license. All T1w images were corrected for intensity non-uniformity (INU) with N4BiasFieldCorrection and distributed with ANTs 2.2.0 ( , RRID:SCR_004757). The T1w reference was then skull-stripped with a Nipype implementation of the antsBrainExtraction.sh workflow (from ANTs), using OASIS30ANTs as the target template. Brain tissue segmentation of cerebrospinal fluid (CSF), white matter (WM), and gray matter (GM) was performed on the brain-extracted T1w using FAST (FSL 5.0.9, RRID:SCR_002823, ). A T1w-reference map was computed after registration and INU-correction of the T1w images used mri_robust_template (FreeSurfer 6.0.1, ). Brain surfaces were reconstructed using recon-all (FreeSurfer 6.0.1, RRID:SCR_001847, ), and the brain mask estimated previously was refined with a custom variation of the method to reconcile ANTs- and FreeSurfer-derived segmentations of the cortical gray matter of Mindboggle (RRID:SCR_002438, ). 
Volume-based spatial normalization to standard MNI152NLin2009cAsym space was performed via nonlinear registration with antsRegistration (ANTs 2.2.0), using brain-extracted versions of both the T1w reference and the T1w template. ICBM 152 Nonlinear Asymmetrical template version 2009c was selected for spatial normalization ( , RRID:SCR_008796; TemplateFlow ID: MNI152NLin2009cAsym). For each subject’s BOLD runs (across all tasks and sessions), the following preprocessing was performed. First, a reference volume and its skull-stripped version was generated using a custom methodology of fMRIPrep. A deformation field to correct for susceptibility distortions was estimated based on fMRIPrep’s fieldmap-less approach. The deformation field resulted from coregistering the BOLD reference to the same-subject T1w reference with its intensity inverted . Registration was performed with antsRegistration (ANTs 2.2.0), and the process was regularized by constraining deformation to be nonzero only along the phase-encoding direction and modulated with an average fieldmap template . Based on the estimated susceptibility distortion, an unwarped BOLD reference was calculated for a more accurate coregistration with the anatomical reference. The BOLD reference was then coregistered to the T1w reference using bbregister (FreeSurfer), which implements boundary-based registration . Coregistration was configured with six degrees of freedom. Head-motion parameters with respect to the BOLD reference (transformation matrices, and six corresponding rotation and translation parameters) were estimated using MCFLIRT (FSL 5.0.9, ). BOLD runs were slice-time corrected using 3dTshift from AFNI 20160207 ( , RRID:SCR_005927). The BOLD timeseries (including slice-timing correction) were resampled onto their original, native space by applying a single, composite transform to correct for head-motion and susceptibility distortions. 
These resampled BOLD timeseries will be referred to as preprocessed BOLD in original space, or just preprocessed BOLD. The preprocessed BOLD timeseries were then resampled into standard space, generating a preprocessed BOLD run in MNI152NLin2009cAsym space. A reference volume and its skull-stripped version were generated using a custom methodology of fMRIPrep. Following the processing and resampling steps, confounding framewise displacement (FD) timeseries were calculated based on the preprocessed BOLD for each functional run, using implementations in Nipype (following the definition by ). Many internal operations of fMRIPrep use Nilearn 0.5.2 ( , RRID:SCR_001362), mostly within the functional processing workflow. For more details of the pipeline, see the section corresponding to workflows in fMRIPrep’s documentation. Normalization and time-averaging of T2*‐weighted data In this set of analyses, BOLD signal was not used since we did not perform a timeseries analyses. Instead, we were interested in iron levels, which are a time-invariant property of brain tissue quantified from T2*‐weighted data . To quantify normalized T2*‐weighted (nT2*w) signal, first high motion timepoints were removed and subsequently excluded from analyses. High motion timepoints were defined as those that exceeded 0.3 mm FD . Then, to correct for scanner drift and potential differences between participants and MRI runs, each volume was normalized to the whole brain mean. The nT2*w signal from each voxel was then aggregated across all remaining volumes of the resting-state and task runs using the median, separately for each participant. The median was used to reduce the impact of outlier volumes . This resulted in a voxel-wise map of median nT2*w signal for each participant. As the presence of iron is inversely related to nT2*w signal, reduced nT2*w signal indicates increased brain tissue iron and therefore increased intrinsic DA availability. 
ROI selection Bilateral caudate, putamen, globus pallidus, accumbens, and thalamus were selected as regions of interest. These regions were selected for two reasons. First, they are dopamine rich, and iron is colocalized with dopamine in the brain . Second, reduced brain tissue iron has been observed in each of these regions in children with ADHD relative to age- and sex-matched TD peers . ROIs were defined using the Harvard-Oxford subcortical atlas . ROIs only included voxels with at least 50% probability of belonging to each specific brain region. NT2*w signal was averaged across all voxels in each ROI, resulting in a single value per ROI. We first analyzed nT2*w signal across a whole basal ganglia ROI and a thalamus ROI. The basal ganglia ROI combined Harvard-Oxford atlas masks of bilateral caudate, putamen, globus pallidus, and accumbens into a single basal ganglia ROI. Next, to assess the regional specificity of basal ganglia nT2*w signal and its relationships with performance, we extracted nT2*w signal from bilateral caudate, putamen, globus pallidus, and accumbens separately. Motion-related quality assurance Any participant with at least 170 volumes remaining across all fMRI runs after excluding high motion timepoints was included in the current study . One participant with ADHD was excluded as their fMRI data following placebo administration did not meet this criterion. To ensure that nT2*w signal was not significantly impacted by motion, we correlated mean FD across all runs with the nT2*w signal for each region of interest (ROI) using Pearson correlations ( α = 0.05). There were no significant relationships between mean FD and nT2*w signal in any of the ROIs we examined (all corrected p-values > 0.48; see , ). Analyses Before conducting each of the below analyses, power analyses were performed to determine statistical power to detect the expected effects. See , Expected Power for details. 
Group comparisons of demographic variables : To ensure that the ADHD and TD groups did not differ on demographic variables, including sex, race, family income, and parental education, we used two-sample chi-squared tests to compare groups on each of these variables. We additionally conducted Welch’s t-tests for unequal variance to compare groups on age, FSIQ, and Word Reading scores. Tests were FDR-corrected for seven comparisons at p < .05. Replication analysis – comparing nT2*w signal in children with ADHD and TD children : We first replicated previous work and examined whether there were group differences between children with ADHD and TD children in nT2*w signal in the whole basal ganglia and thalamus . We used separate linear regression models covarying for age and sex for each of the two ROIs, as follows: nT2*w signal ∼ group + age + sex Models were FDR-corrected for two comparisons at p < .05. We also determined whether there were group differences in nT2*w signal in specific basal ganglia subregions (i.e., caudate, putamen, globus pallidus, and accumbens). In this secondary analysis, we again used separate linear regression models covarying for age and sex for each of the four ROIs. Here, results were FDR-corrected for four comparisons at p < .05. One participant with ADHD was not included in this analysis as their fMRI data following placebo administration had fewer than 170 volumes included after excluding high-motion timepoints (See Motion-related quality assurance ). This left 64 participants (n ADHD = 35, n TD = 29) in this analysis.
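The per-ROI regression of nT2*w signal on group, age, and sex, followed by Benjamini–Hochberg FDR correction across the family of ROIs, can be written out with NumPy/SciPy. The design-matrix layout and 0/1 group coding below are assumptions for illustration, not the authors' exact model code.

```python
import numpy as np
from scipy import stats

def ols_term_pvalue(y, X, term):
    """Two-sided p-value for one OLS coefficient.

    X must contain an intercept column; 'term' is the index of
    the column being tested (e.g., the group indicator).
    """
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    dof = len(y) - X.shape[1]
    cov = (resid @ resid / dof) * np.linalg.inv(X.T @ X)
    t = beta[term] / np.sqrt(cov[term, term])
    return 2 * stats.t.sf(abs(t), dof)

def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p-values for one family of
    tests (e.g., one regression model per ROI)."""
    p = np.asarray(pvals, dtype=float)
    n = len(p)
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.minimum(adj, 1.0)
    return out
```

For each ROI one would build X with columns [intercept, group, age, sex] and test the group column, then pass the family of p-values (two ROIs in the primary analysis, four in the secondary) to `fdr_bh`. The same machinery extends to the later performance models, with an added product column for the nT2*w-by-group interaction terms.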
Relationship between nT2*w signal and response inhibition : To determine the relationship between nT2*w signal and response inhibition in children with ADHD and TD children, we used linear regression models covarying for age and sex using the following equation: response inhibition performance ∼ nT2*w signal + age + sex For the primary analysis, we used separate linear regression models for each response inhibition performance measure (commission errors, tau) and ROI (whole basal ganglia and thalamus) for a total of four models (two response inhibition measures x two ROIs). Statistical tests were FDR-corrected for two comparisons separately for each response inhibition measure at p < .05, as we included two ROIs in the primary analysis . In a secondary analysis that assessed regional specificity of the relationship between nT2*w signal and response inhibition performance, we extracted nT2*w signal from each basal ganglia ROI separately. We used separate linear regression models for each response inhibition performance measure (commission errors, tau) and basal ganglia ROI (caudate, putamen, globus pallidus, and accumbens) for a total of eight models (two response inhibition measures x four ROIs). Here, statistical tests were FDR-corrected for four comparisons separately for each response inhibition measure at p < .05, as we included four basal ganglia ROIs . Given literature indicating that the relationship between brain tissue iron and response inhibition performance is strongest in individuals with high levels of brain tissue iron , and that individuals with ADHD have reduced brain tissue iron relative to TD individuals , we performed an additional analysis in which we examined whether the relationship between nT2*w signal and response inhibition performance differed as a function of diagnostic group. 
As such, we implemented linear regression models wherein response inhibition performance was predicted by nT2*w signal, diagnostic group, and the interaction between nT2*w signal and diagnostic group, covarying for age and sex as follows: response inhibition performance ∼ nT2*w signal + group + nT2*w signal*group + age + sex Separate linear regression models for each response inhibition measure and ROI were used, and groups of models were FDR-corrected separately at p < .05 as above (i.e., two corrections per response inhibition measure for whole basal ganglia and thalamus in the primary analysis; four corrections per response inhibition measure for caudate, putamen, globus pallidus, and accumbens in the secondary analysis). One additional participant was not considered for inclusion in this analysis due to inconsistent presentation of no-go stimuli (n = 1, TD). This left 63 participants (n ADHD = 35, n TD = 28) in this analysis. Relationship between nT2*w signal and responsivity to reward : To examine whether variability in nT2*w signal predicts responsivity to reward in children with ADHD and TD children, change in performance from the standard go/no-go task to the rewarded go/no-go task was calculated for all participants by subtracting performance measures on the rewarded go/no-go task from those on the standard go/no-go task, such that higher values (i.e., more positive) reflected greater improvements in task performance. Linear regression models covarying for age and sex were used to relate nT2*w signal to change in response inhibition performance separately for each response inhibition measure and ROI using the following equation: ∆ response inhibition performance ∼ nT2*w signal + age + sex As in the analyses above, separate linear regression models for each response inhibition measure and ROI were used. Again, groups of models were FDR-corrected separately at p < .05 . 
Specifically, for the primary analysis, two corrections per response inhibition measure (commission errors, tau) were made for the whole basal ganglia and thalamus. In the secondary analysis, four corrections per response inhibition measure were made for the basal ganglia subregions (i.e., caudate, putamen, globus pallidus, and accumbens). We also performed an additional analysis that examined whether the relationship between nT2*w signal and responsivity to reward differed as a function of diagnostic group. As such, we implemented linear regression models wherein change in response inhibition performance was predicted by nT2*w signal, diagnostic group, and the interaction between nT2*w signal and diagnostic group, covarying for age and sex as follows: ∆response inhibition performance ∼ nT2*w signal + group + nT2*w signal*group + age + sex Separate linear regression models covarying for age and sex for each response inhibition measure and ROI were used. Models were FDR-corrected in the same way as described in the previous analyses at p < .05 . That is, for each response inhibition measure (commission errors, tau) two corrections were made in the primary analysis that examined nT2*w signal in the basal ganglia and thalamus, and four corrections were made in the secondary analysis that examined nT2*w signal in the four basal ganglia subregions (i.e., caudate, putamen, globus pallidus, and accumbens). Two additional participants were not considered for inclusion in this analysis for missing rewarded go/no-go data (n = 1, TD) and incorrect button presses during the rewarded go/no-go task (n = 1, ADHD), leaving 61 participants (n ADHD = 34, n TD = 27) in this analysis. Relationship between nT2*w signal and responsivity to MPH : To investigate whether variability in nT2*w signal predicts responsivity to MPH in children with ADHD, change in standard go/no-go performance from placebo to MPH was calculated by subtracting performance measures on MPH from those on placebo. 
Here, higher values (i.e., more positive) indicate greater improvement of performance following MPH. Linear regression models covarying for age and sex were used to relate nT2*w signal to change in response inhibition performance separately for each response inhibition measure and ROI using the following equation: ∆ response inhibition performance ∼ nT2*w signal + age + sex As in the previous analyses, statistical tests were FDR-corrected separately for each response inhibition measure at p < .05 . In the primary analysis, statistical tests were corrected for two comparisons per response inhibition measure (commission errors, tau), as there were two ROIs (whole basal ganglia and thalamus). In the secondary analysis, statistical tests were corrected for four comparisons per response inhibition measure, as there were four basal ganglia subregion ROIs (caudate, putamen, globus pallidus, and accumbens). Three participants with ADHD were not considered for inclusion in this analysis for missing standard go/no-go data on placebo (n = 1) and on MPH (n = 2), leaving 33 participants in this analysis. For all linear regression models, standardized betas are reported in the Results section. Results 3.1 Group comparisons of demographic variables There were no significant differences between the ADHD and TD groups on age, FSIQ, word reading scores, sex, race, family income, or parental education (all corrected p-values > 0.44; ). 3.2 Replication analysis – comparing nT2*w signal in children with ADHD and TD children There were no significant group differences in nT2*w signal in the whole basal ganglia and thalamus ROIs or in the individual basal ganglia ROIs (i.e., caudate, putamen, globus pallidus, and accumbens) (all corrected p-values > 0.52; see , ). 3.3 Relationship between nT2*w signal and response inhibition First, we examined the relationship between nT2*w signal and response inhibition performance on the standard go/no-go task in all participants. 
There were no significant relationships between nT2*w signal in the whole basal ganglia or thalamus ROIs and the proportion of commission errors (both corrected p-values > 0.15; a-b ). When assessing the regional specificity of the relationship between nT2*w signal and the proportion of commission errors, lower nT2*w signal (i.e., higher brain tissue iron) in the putamen was significantly related to higher proportion of commission errors ( β = − 0.13 , corrected p-value = .04; c ). There were no significant relationships between nT2*w signal of the caudate, globus pallidus, or accumbens and the proportion of commission errors (all corrected p-values > 0.24; d , , ). When relating nT2*w signal to tau, no relationships were significant (all corrected p-values > 0.83). For parameter estimates and plots of relationships with tau and of additional ROIs, see , , . In additional analyses that examined group differences in the relationship between nT2*w signal in the whole basal ganglia and thalamus ROIs and the proportion of commission errors and tau, we did not observe significant interaction effects (both corrected p-values for proportion of commission errors > 0.67; both corrected p-values for tau > 0.17). We similarly did not observe significant interaction effects when examining each basal ganglia ROI separately (all corrected p-values for proportion of commission errors > 0.62; all corrected p-values for tau > 0.10). For parameter estimates, see , . 3.4 Relationship between nT2*w signal and responsivity to reward Next, we examined whether there was a relationship between nT2*w signal and responsivity to reward in all participants. We operationalized responsivity to reward as the change in performance (i.e., the proportion of commission errors or tau) between the standard and rewarded go/no-go task (standard go/no-go – rewarded go/no-go). 
There were no significant relationships between nT2*w signal in the basal ganglia or thalamus and responsivity to reward as measured by change in proportion of commission errors (both corrected p-values > 0.20; -b ). Further, there were no significant relationships between nT2*w signal in any of the subregions of the basal ganglia (i.e., caudate, putamen, globus pallidus, and accumbens) and change in proportion of commission errors (all corrected p-values > 0.25; -d , , ). When relating nT2*w signal to change in tau, no relationships were significant (all corrected p-values > 0.43). For parameter estimates and plots of relationships with tau and of additional ROIs, see , , . In additional analyses that examined group differences in the relationship between nT2*w signal in the whole basal ganglia and thalamus ROIs and the change in proportion of commission errors and tau, we did not observe significant interaction effects (both corrected p-values for proportion of commission errors > 0.07; both corrected p-values for tau > 0.96). We similarly did not observe significant interaction effects when examining each basal ganglia ROI separately (all corrected p-values for proportion of commission errors > 0.06; all corrected p-values for tau > 0.38). For parameter estimates, see , . 3.5 Relationship between nT2*w signal and responsivity to MPH We then examined whether there was a relationship between nT2*w signal and responsivity to MPH in children with ADHD. Responsivity to MPH was defined as a change in the proportion of commission errors or tau on the standard go/no-go task from placebo to MPH (placebo – drug). There was a significant relationship between nT2*w signal in the basal ganglia and change in the proportion of commission errors ( β = − 0.47 , corrected p-value = .01; a ). That is, lower basal ganglia nT2*w signal (i.e., higher brain tissue iron) was significantly related to greater improvements in the proportion of commission errors on MPH. 
There was not a significant relationship between nT2*w signal in the thalamus and change in proportion of commission errors (corrected p-value > 0.26; b ). In secondary analyses examining each basal ganglia subregion separately, there were significant relationships between nT2*w signal in the caudate and in the putamen and change in proportion of commission errors (caudate: β = −0.56, corrected p-value = .005; putamen: β = −0.47, corrected p-value = .01; c-d ). There were no significant relationships between nT2*w signal in the globus pallidus or accumbens and change in proportion of commission errors (both corrected p-values > 0.26; , ). When relating nT2*w signal to change in tau, no relationships were significant (all corrected p-values > 0.06). For parameter estimates and plots of relationships with tau and of additional ROIs, see , , -8 .
Discussion The main goal of this study was to examine how brain tissue iron levels in the basal ganglia and thalamus related to the cognitive effects of dopaminergic modulation in children with ADHD and TD children. While we did not find significant group differences in basal ganglia or thalamic brain tissue iron, we did observe that tissue iron levels in the putamen related to proportion of commission errors on the standard go/no-go task. Critically, tissue iron levels in the whole basal ganglia, and specifically in the putamen and caudate, were significantly related to improvements in proportion of commission errors on the standard go/no-go task following MPH administration in children with ADHD. First, in both registered and unregistered supplementary validation analyses, we confirmed that brain tissue iron measurements are stable over a weeks-long period in TD children and following a one-time MPH challenge in children with ADHD (see ). Prior work has demonstrated that brain tissue iron measurements are stable over a months-long period in children and minutes- and days-long periods in adults . We are the first to show that brain tissue iron measurements in children are indeed stable when assessed approximately one week apart.
Additionally, previous work has shown that brain tissue iron levels normalize as a function of chronic psychostimulant treatment . We have now confirmed that there is no change in brain tissue iron following a single administration of MPH. As such, our work represents a crucial contribution to the literature. We did not find significant differences in brain tissue iron in the basal ganglia or thalamus between children with ADHD and TD children. Existing literature is inconsistent in this regard, likely due to the methods used to quantify brain tissue iron. For example, Adisetiyo and colleagues leveraged both MRI relaxation rates, as implemented here, and magnetic field correlation (MFC). They only observed group differences using MFC-derived measures of brain tissue iron. Further, the ages of our study participants (i.e., 8–12 y) cover a narrower range than those of other studies, which span at minimum 8–14 years . Adolescence is a period of significant change within cortical and subcortical dopamine systems , and these changes can be observed using T2*‐weighted imaging . It is therefore possible that differences in brain tissue iron levels between children with ADHD and TD children may not emerge until adolescence. In addition to examining whether there were group differences in brain tissue iron, we examined relationships between brain tissue iron and response inhibition performance. Previous research examining the relationships between brain tissue iron and cognition has shown that greater levels of brain tissue iron were related to faster processing speed and higher general intelligence , as well as greater verbal reasoning, nonverbal reasoning, and spatial processing . Though previous work has not focused specifically on response inhibition, we hypothesized that we would similarly find that greater levels of brain tissue iron would be related to better cognitive performance in our study.
Instead, we found that higher brain tissue iron in the putamen was related to more commission errors (i.e., worse response inhibition performance) on the standard go/no-go task. Though this significant relationship was specific to the putamen, the general direction of these relationships was largely consistent across brain regions examined. This finding is consistent with literature that has examined the interplay between the dopamine system and response inhibition using other indices of dopamine functioning . For example, reduced D2/D3 receptor availability in the caudate and putamen and increased spontaneous eye-blink rate, both of which indicate increased extracellular dopamine levels, have been related to poorer response inhibition performance on the stop-signal task (i.e., increased stop-signal reaction times) . The present study is therefore in line with previous work, as our results suggest that increased dopamine levels as indexed by increased putamen tissue iron are related to more commission errors (i.e., poorer performance) on the standard go/no-go task. Our findings and others’ are also in line with models of striatal behavioral control that characterize ‘go’ (direct) and ‘no-go’ (indirect) pathways of the basal ganglia . Specifically, increased dopamine levels are hypothesized to bias the balance toward the ‘go’ pathway and suppress the ‘no-go’ pathway , which would result in increased commission errors, as we observed in our study. In children with ADHD, we found that higher basal ganglia tissue iron levels were associated with greater responsivity to MPH, as indexed by a greater reduction in commission errors on the standard go/no-go task. When focusing on specific basal ganglia subregions, this result was driven by significant relationships in the caudate and putamen. The caudate and putamen are key regions in the cortical-basal ganglia loops associated with cognitive control broadly and response inhibition specifically . 
Dopaminergic activity in these regions has also been related to the ADHD phenotype (i.e., inattentive symptoms) and to performance improvements following reward in healthy individuals . The observations that participants with higher brain tissue iron levels exhibited more commission errors, as well as that children with ADHD with higher brain tissue iron levels exhibited the greatest reduction in commission errors following MPH, suggest that medication effects on response inhibition might be related to response inhibition performance at baseline (i.e., on placebo). Recent work in children with ADHD shows that inhibitory control improvements following MPH administration were greatest in those with the poorest baseline inhibitory control . We confirmed this was the case in our data via an unregistered exploratory analysis in which we conducted a linear regression model predicting change in commission errors (placebo – drug) from baseline commission errors (placebo), controlling for age and sex. We found that children with ADHD with the most commission errors on the standard go/no-go task on placebo improved the most following MPH (β = .95, p = .01). Notably, these findings are consistent with prior literature observing that individuals with higher intrinsic DA are more responsive to MPH , including reduction of symptoms and improvement of cognitive functioning . Even so, it has been shown that the relationships between intrinsic DA and cognitive improvements following MPH administration depend on the specific domains of cognition examined . As such, additional investigations of the precise dopaminergic mechanisms through which MPH improves response inhibition across individuals with varying levels of intrinsic, baseline DA are needed. The administration of MPH and the receipt of rewards are both known to improve cognition via their impact on the dopamine system .
While recent work has shown that individuals with high levels of brain tissue iron display greater responsivity to dopaminergic modulation via the receipt of rewards , we did not observe this in our data. Crucially, however, we did observe that the relationships between brain tissue iron and the reduction in commission errors from the standard to rewarded go/no-go task were in the same direction as those observed in analyses that examined responsivity to MPH. Thus, this is generally consistent with prior work suggesting that the receipt of rewards and MPH modulate dopamine similarly by increasing its synaptic availability in the striatum . Notably, prior literature has reported greater striatal activation in the presence of both MPH and rewards relative to the presence of reward alone and that MPH administration results in greater improvements in cognitive performance relative to reward-related reinforcement , corresponding to a greater effect size (d = 1.54 for MPH and d = 0.60 for rewards). Thus, a larger sample may have been needed to detect the smaller effect of rewards on response inhibition. This highlights the need to replicate these results in a larger sample of children with and without ADHD. We did not observe significant relationships between brain tissue iron and response time variability as indexed by tau on the standard go/no-go task, nor between brain tissue iron and responsivity to reward or MPH. Response time variability is thought to index a range of cognitive processes, including attention and working memory . Larsen and colleagues did not find relationships between brain tissue iron and the ‘executive control’ cognitive domain of the Penn computerized neurocognitive battery, which is defined as abstraction and mental flexibility, attention, and working memory .
Our results are therefore consistent with this work and suggest that the relationships between brain tissue iron and response inhibition performance might be specific to the ability to withhold responding (i.e., stopping), which is better captured via commission error quantification. Though our findings contribute to the growing body of evidence that brain tissue iron neurophysiology is linked to cognition in children with ADHD and TD children, there are certain limitations that must be acknowledged. First, it is important to recognize that iron is not a direct measure of all aspects of dopamine function but is most associated with presynaptic dopamine availability . Larsen and colleagues showed that brain tissue iron measurements derived from tissue relaxation rates were significantly associated with PET-derived presynaptic vesicular dopamine storage, although not in a 1:1 manner. Iron is important in several biological processes that are not limited to the dopaminergic system, including myelination and production of other catecholamines . While the basal ganglia is unique in its predominance of DA, tissue iron is not a direct measure of dopamine function, and results should be interpreted with this limitation in mind. Regardless, leveraging tissue relaxation to quantify dopamine indirectly in children with ADHD and TD children is a promising avenue of research, given the radiation exposure associated with PET imaging and subsequent challenges of assessing dopamine levels in vivo in children. Additionally, we did not collect daily serum iron data from participants in the present study. Given the dynamic nature of peripheral iron levels , the relationship between serum iron and brain tissue iron is difficult to quantify. Even so, it has been shown that serum iron levels do not differ between individuals with and without ADHD , so it is unlikely that our results are driven by differences in serum iron level.
However, future studies should investigate whether serum iron level relates to cognitive performance and responsivity to dopaminergic manipulation as we have here with brain tissue iron. We also did not collect sleep data from our participants. We therefore cannot determine whether variability in response inhibition performance in this sample is due to variability in sleep duration or quality. To mitigate the possibility that sleep differences impacted our results, we excluded individual runs of the standard and rewarded go/no-go tasks based on omission error rates to ensure that subjects were awake and responding to the task, as described in Go/no-go tasks and measures. Finally, MPH is an indirect dopamine and norepinephrine agonist . We were therefore not able to determine whether modulation of the norepinephrine system impacted improvements in response inhibition following the administration of MPH. Additional investigations into the precise neural mechanisms through which MPH improves cognition are needed to answer this question. In conclusion, while we did not observe significant differences in basal ganglia or thalamic tissue iron in children with ADHD and TD children aged 8–12 y, we did validate the assumption that tissue iron is stable across a weeks-long period in TD children and following a one-time MPH challenge in children with ADHD. We additionally demonstrated that increased tissue iron in the putamen was significantly related to increased commission errors on the standard go/no-go task, and that increased tissue iron in the caudate and putamen, as well as generally in the whole basal ganglia, was significantly related to improvements in the proportion of commission errors following MPH administration. These relationships were not observed when response inhibition performance was indexed using tau (i.e., response time variability).
Together, these findings augment the existing literature examining brain tissue iron and its relationships with cognition in children with ADHD and TD children, and this work is one of the first to clarify the role of brain tissue iron in the cognitive effects of dopaminergic modulation. This work is a crucial step toward understanding the mechanisms of both behavioral and medication treatment for ADHD. Further, the present findings suggest that noninvasive brain tissue iron measurements may represent a biomarker for response to dopaminergic treatment in ADHD. The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
The Efficacy of Various Orthodontic Appliances in the Treatment of Obstructive Sleep Apnea | 70850364-9892-44f6-9ceb-a780a7dd7e9b | 11831541 | Surgical Procedures, Operative[mh] | Obstructive sleep apnea (OSA) is a common, chronic disorder characterized by successive episodes of upper airway collapse with an increase in the airflow resistance, which leads to a decrease (hypopnea) or complete cessation of airflow (apnea) during sleep. The prevalence of the disorder in the general population varies from 3 to 7% in adult males, and 2% to 5% in adult females . Breathing cessation causes acute adverse effects, such as desaturation of oxyhemoglobin, vomiting, high blood pressure and heart rate, increased sympathetic activity, sleep fragmentation, etc. . Risk factors for the development of obstructive sleep apnea primarily include: older age, male gender, obesity, and craniofacial anomalies, as well as anomalies of the upper respiratory pathways. The prevalence of sleep related problems, including obstructive sleep apnea, increases with age. The prevalence increases steadily until the age of 60, after which it reaches a plateau. Possible reasons for the increase in the prevalence of OSA during aging are structural changes in the parapharyngeal area, such as increased deposition of fatty tissue and lengthening of the soft palate . Treatment of patients with obstructive sleep apnea requires a multidisciplinary approach. Therapeutic options include continuous positive pressure therapy (CPAP), followed by weight loss, surgical interventions to the upper respiratory pathways, and intraoral orthodontic devices . Intraoral devices, as a therapeutic option for OSA, are recommended for the treatment of mild and moderate OSA, as well as severe OSA in patients who do not tolerate CPAP therapy, or when CPAP therapy has proven to be unsuccessful . 
Intraoral devices can be divided into three groups: tongue-retaining devices (TRD), soft palate lifters (SPL), and mandibular advancement devices (MAD). SPL devices have been completely abandoned today, while the remaining two groups of devices are still in use. The TRD design consists of a flexible extraoral bulb that holds the tongue forward by gentle suction, thereby opening the airway during sleep . The most commonly used intraoral devices are mandibular advancement devices (MAD). MADs consist of splints placed on the upper and lower teeth, with the aim of protruding the mandible and keeping it in a protruded position . This leads to the expansion of the upper airways by lateral movement of parapharyngeal fatty deposits, as well as forward positioning of the base of the tongue. Additionally, there are changes in muscle activity, with the focus on relaxation of the genioglossus muscle and activation of the masseter and submental muscles. By their action, MAD devices reduce the collapsibility of the upper respiratory pathways, resulting in a reduction in apnea episodes during sleep . Current research on the effectiveness of different oral devices for the treatment of OSA has yielded conflicting conclusions . The goal of this review was to determine the effectiveness of different types of monobloc and bibloc MAD devices in the treatment of all forms of OSA, by reviewing the available literature. Information Sources For the purpose of this review, a systematic literature search was performed in PubMed, ResearchGate, NCBI and Google Scholar databases.
The search was conducted using MeSH search strategies and combined text terms: obstructive sleep apnea and oral appliance, monobloc oral appliance, bibloc oral appliance, mandibular advancement device, fixed mandibular advancement device, custom-made mandibular advancement device, monobloc mandibular advancement device, and bibloc mandibular advancement devices. The search included articles in English, published in the inclusive time period from 2000 to 2024. Selection Process The literature review included two steps. In the first step, a literature search was performed with an overview of the available abstracts. The second step included collection of the full text of all studies that fully met the inclusion criteria. Ultimately, this review paper included a total of 13 studies directly comparing the impact of both monobloc and bibloc types of devices. Eligibility Criteria The studies included randomized controlled studies, nonrandomized prospective studies, clinical studies with organized data collection, and cohort studies. The inclusion criteria were: studies that evaluated the performance of two or more types of devices that could be classified as monobloc or bibloc type; a definitive diagnosis of OSA established on the basis of polysomnography with an apnea-hypopnea index (AHI) value greater than five; and the outcome of therapy with a MAD device assessed on the basis of a controlled polysomnographic study together with the ESS score (Epworth Sleepiness Scale) or SAQL score (Sleep Apnea Quality of Life). Exclusion criteria were non-English articles, case reports and review articles, different diagnostic criteria for OSA, and articles with insufficient data for analysis.
Presentation of Data The recorded data include: the name of the authors and date of the publication of the research; study design, device design, demographic data; BMI values; the number of patients in the study; mandibular protrusion value and vertical dimensions; the degree of OSA; success of the therapy; unwanted effects of the device; acceptance of therapy; and the economic profitability of the type of device. The success criterion is defined by AASDM (American Academy of Dental Sleep Medicine) as a reduction in the AHI value by 50% from the basal level, or a reduction in the degree of OSA.
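As an illustrative aid only (the function name and default cutoff are ours, not taken from any of the included studies), the AASDM success criterion above, together with the AHI cutoffs the individual studies used for complete response, can be sketched as a small Python helper:

```python
def ahi_response(baseline_ahi: float, treated_ahi: float,
                 complete_cutoff: float = 5.0) -> str:
    """Classify MAD treatment outcome from baseline and follow-up AHI.

    AASDM-style success: AHI reduced by at least 50% from baseline.
    `complete_cutoff` is the study-specific threshold for a complete
    response (the reviewed trials used <5 or <10 events per hour).
    """
    if baseline_ahi <= 0:
        raise ValueError("baseline AHI must be positive")
    if treated_ahi < complete_cutoff:
        return "complete response"
    if (baseline_ahi - treated_ahi) / baseline_ahi >= 0.5:
        return "success"
    return "non-response"

# Example: a moderate-OSA patient whose AHI falls from 28 to 9
# events per hour meets the >=50% reduction criterion.
print(ahi_response(28, 9))   # success
```

Because the complete-response cutoff varied between trials (<5 vs. <10 events per hour), the same follow-up polysomnography can be classified differently depending on the study's definition, which is one reason the reported success rates below are not directly comparable.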
A total of 13 studies were analyzed that directly compared the effectiveness of monobloc and bibloc devices. The studies were published in the period from 2000 to 2022, and included crossover and parallel randomized controlled trials, as well as crossover and parallel cohort studies. Out of the 13 studies, four were classified as RCT parallel studies, six were RCT crossover studies, two were cohort parallel studies, and one was a cohort crossover study. The duration of the studies was variable, ranging from four weeks to one year, with six studies having a so-called “washout period” between the use of monobloc and bibloc MAD devices.
This washout period is an interval during which the subject does not use any type of MAD device, and it was used in the studies where one group of subjects used both types of devices . Four studies showed the equal effectiveness of both types of MAD devices by measuring the basal and control values of the AHI index . Six studies reported the greater efficacy of monobloc MAD devices . Three studies showed the better efficacy of the bibloc MAD device . The definition of treatment success on the basis of the AHI index differs between these studies. In 10 studies, complete treatment success is defined as an AHI value <5 after MAD therapy, or a reduction in the AHI value by 50% after MAD therapy. The results of therapy success in relation to the AHI index also differ. In 2017 and 2019, Isacsson et al. achieved equal success in both groups. A positive response to therapy, defined as a reduction in the AHI value to less than 10 events per hour, was achieved in 61% of subjects in the monobloc group, and 56% of subjects in the bibloc group . The 2019 study reported that both monobloc and bibloc MAD devices led to a decrease in AHI values by 12 to 14 apneic events per hour . A significant improvement in the AHI index was recorded in both device groups by Yamamoto et al., with complete success of the therapy in almost half of the subjects in both groups . The Al-Dharrab study showed the same result, where both types of devices showed a reduction greater than 50% in mean AHI, which coincides with the definition of treatment success . This study has a limitation in that the sample size was too small to highlight any difference between the two appliances. Five studies included in this review demonstrated the superiority of monobloc devices in lowering the AHI value . The greater success of the monobloc devices compared to the bibloc devices was noted by Bloch et al.
The definition of successful treatment in this study was a reduction in AHI values below 10 events per hour, which was achieved in 18 subjects with a monobloc device (75%), and 16 subjects with a bibloc device (67%), out of the total number of 24 subjects. Although both types of device led to a decrease in the value of the AHI index, the monobloc device resulted in a statistically more significant reduction . Clinically, reductions in snoring and in certain aspects of impairment of daily activities were more pronounced with the monobloc than with the bibloc device. In addition, there was a trend toward greater improvement in several objective variables of breathing and sleep disturbance with the monobloc device. La Mantia et al., Umemoto et al. and Hyun Lee et al. also demonstrated the greater success of monobloc devices in reducing the value of the AHI . In the La Mantia study, both MADs showed efficacy in improving objective parameters compared to the baseline, with a significant difference in favor of the monobloc in terms of improving AHI . The monobloc group had 14 subjects with a complete response to therapy, i.e. the complete success of therapy, while complete success of therapy was noted in only five subjects in the bibloc group . In the study by Lee WH et al., therapy success, defined as a reduction in AHI values by 50%, was noted in 77.4% of subjects in the monobloc group and 58.3% in the bibloc group . Greater success in reducing AHI values in the monobloc group was noted by Geoghegan et al. , while Zhou et al. reported an absolute decrease in AHI to less than 10 events per hour in 68.0% of subjects in the monobloc group, compared to 56.3% in the bibloc group . The greater success of the bibloc type of device was demonstrated in three studies included in this review . Sari et al. demonstrated the better success of the Clearway bibloc device in lowering AHI index values on follow-up PSG analyses.
The follow-up was carried out after 7 days and after one month from the start of using the device, where the second follow-up analysis showed a more significant decrease in the value of the AHI index . All patients subjectively reported more restful sleep with a reduction in snoring. In addition, minimum oxygen saturation increased at the end of the first week, and rose above 90% at the end of the first month in both groups. At follow-up examinations after one year of using the MAD device, Tegelberg et al. reported the greater success of the Narval bibloc device compared to the monobloc device. Although a significant decrease in the value of the AHI index was recorded in the bibloc group, successful therapy (AHI<10) was recorded in 68% of subjects in the bibloc group and 65% of subjects in the monobloc group . Lettieri et al. reported a greater reduction in obstructive events in the bibloc group. In the bibloc group, the AHI value decreased by 74.4%, and in the monobloc group that value was 64.9%. Complete success of therapy, defined as a reduction in the AHI value to less than 5 events per hour, was achieved in 57.2% of subjects in the bibloc group, and 46.9% in the monobloc group . According to these data, it has been demonstrated that both types of MAD devices lead to a reduction in AHI index values, and thus to the success of OSA therapy . A large number of studies point to the greater success of monobloc devices in lowering the AHI index; however, the fact that these are short-term studies should be taken into account. Assessment of the efficacy of monobloc and bibloc device therapies is also based on the severity of obstructive sleep apnea (OSA). Out of the 12 studies analyzed, six studies evaluated the impact of both types of devices on the treatment of mild and moderate OSA . All the studies resulted in the conclusion that both monobloc and bibloc devices lead to a reduction in AHI values, i.e.
a reduction in AHI values by 50% in both mild and moderate OSA. Isacsson et al. recorded more successful results of both types of devices in the treatment of moderate OSA . The effectiveness of both monobloc and bibloc devices in the treatment of severe OSA was assessed in three of the analyzed studies. Research by Lee WH et al. showed a higher success rate of monobloc devices in the treatment of severe OSA, with a value of 86%, while the bibloc device recorded a success rate of 69.7% . A limitation of this study is the relatively short follow-up duration for evaluating compliance. Despite these limitations, the study may be meaningful in that it compared efficacy and compliance between monobloc and bibloc devices in the same patient population. Lettieri et al., however, did not record the greater success of monobloc devices in the treatment of severe OSA. On the contrary, most subjects with severe OSA did not respond to monobloc device therapy, compared to a bibloc device . The study by Tegelberg et al. reported that both types of devices resulted in a significant reduction in AHI values in the group of subjects with severe forms of OSA (AHI>30), with the slightly higher efficacy of the bibloc device . Preferably, the final selection of appliances should be made by dental specialists, in accordance with and adjusted to the individual patient, thereby introducing personalized medicine in MAD management. Cost aspects, such as appliance price and the number of return visits, are secondary factors which differ with every appliance design and per patient. Recommendations for the optimal MAD design and phenotyping of OSA patients are difficult to draw and insufficiently supported by the current literature . Study Limitations It is important to acknowledge certain limitations within this review. First, the studies analyzed in this review are predominantly short-term in nature, with most having small sample sizes.
Additionally, a significant portion of the studies primarily include male subjects, which may not fully represent the population affected by OSA. Given the chronic nature of OSA, necessitating lifelong therapy, there is a critical need for longer-term studies to explore the sustained effectiveness of these devices. Furthermore, due to anatomical differences in the airway between male and female populations, studies are needed that directly compare the efficacy of specific device types in both groups, as well as using larger sample sizes to enhance the robustness of the findings. From the analysis of the included studies regarding the efficacy of MAD devices in reducing AHI values, it can be inferred that both monobloc and bibloc devices demonstrate comparable success rates in the management of mild to moderate OSA. Nevertheless, in cases of severe OSA, the bibloc device demonstrated superior efficacy. Consequently, the initial treatment preference for mild to moderate OSA may lean towards a monobloc device, while consideration of a bibloc device may arise if the monobloc device yields unsatisfactory outcomes, or is not well-tolerated by the patient.
An alternative type of MAD device may be considered as a subsequent option in the event of an insufficient response to initial MAD therapy, before consideration of CPAP therapy referral for the patient. Authors’ Contributions: Conception and design: AG; Acquisition, analysis and interpretation of data: AJ; Drafting the article: VDZ; Revising it critically for important intellectual content: AT; Approved final version of the manuscript: LRV.
Multi-omics analysis reveals the interplay between intratumoral bacteria and glioma | 34763507-c87f-4e41-a187-6ac1bfc8d5fe | 11748541 | Biochemistry[mh] | Glioma, as the predominant primary malignancy of the central nervous system , poses a formidable challenge for treatment worldwide owing to its high recurrence, mortality, and poor prognosis. It is imperative to further elucidate the pathogenic mechanisms of glioma and develop novel therapeutic strategies. Increasing evidence has underscored the significant contributions made by intratumoral microbiota in tumor progression , metastasis , and treatment . Therefore, elucidating the molecular characteristics of glioma from the perspective of the intratumoral microbiome is of great significance for understanding the etiology of glioma and developing new bacteria-based therapeutic strategies. Whether microbiota exists in glioma, or even in the brain remains issues under debate. The brain has traditionally been regarded as a sterile organ due to the blood-brain barrier. However, recent evidence indicates that microbiota may directly inhabit the brain under non-inflammatory and non-traumatic conditions . Gram-negative bacterial molecules and Porphyromonas gingivalis were detected in the brains of Alzheimer’s disease patients and found to be associated with tau protein accumulation . Rombert et al. reported in an abstract at the American Neuroscience Annual Meeting that they found rod-shaped bacteria in the healthy human post-mortem brains under electron microscopy. In addition, a study conducted a comprehensive detection and analysis of the intratumoral microbiota of seven types of solid tumors, including glioblastoma . They found that glioblastoma harbored a unique microbial community and that the level of bacterial DNA was not low. Due to the small sample size and the limitations of the experimental design, this study did not reveal the role of intratumoral microbiota in glioblastoma. 
In our previous work , we used intact tissues and combined tissue clearing and immunofluorescence staining methods to observe bacterial lipopolysaccharide (LPS) in glioma in three-dimensional space. Although we further confirmed the presence of bacterial LPS in human glioma tissue morphologically, more evidence is needed to elucidate the spatial relationship and function of bacteria in glioma. The study of the tumor microbiome is a challenging task due to the low intratumoral bacterial biomass and the uncultivability of some microbial species. Novel high-resolution techniques are urgently required to facilitate further research. Multi-omics techniques can elucidate the tumor microbiota at different levels, thereby facilitating a comprehensive understanding of the intricate biological processes associated with it. To this end, we simultaneously performed 16S rRNA sequencing, metabolomics and transcriptomics analyses on glioma tissue (G) samples and adjacent normal brain tissue (NAT) samples. Our results suggested that the intratumoral microbiota of glioma may affect the expression of neuron-related genes through bacteria-associated metabolites. Human subjects and sample collection We obtained 110 fresh frozen tissues, 8 paraffin tissues, and 10 stool samples from patients with pathologically confirmed glioma. The clinical information of the patients is shown in Table S1. The stool samples were taken before cancer treatment, and individuals who had received preoperative radiation or chemotherapy or had a previous history of glioma were excluded. All samples were collected from Zhujiang Hospital of Southern Medical University (Guangzhou, China). Tissue samples were meticulously collected in sterile cryotubes and promptly transported in a portable liquid nitrogen tank to the Clinical Biobank Center, where they were kept at −196°C for long-term storage until DNA extraction. Fecal samples were rapidly transferred to a −80°C freezer and stored until further use.
Mice Athymic BALB/c nude mice (4 weeks; 20–25 g) were purchased from GemPharmatech Corporation (Nanjing, China) and maintained in a specific pathogen-free environment under a 12 h light-dark cycle with free access to food and water. Cell lines and culture Human glioblastoma lines (U87, U251, and Ln229) were obtained from the Chinese Academy of Sciences. The cells were cultured in Dulbecco’s Modified Eagle Medium (DMEM; Gibco, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (Gibco). All cell lines were maintained at 37°C and 5% CO2 in a humidified environment. Bacterial strains Fusobacterium nucleatum ATCC 25586 (Fn) was kindly gifted by Dr. Songhe Guo. Fn was grown anaerobically at 37°C for 48–72 h on blood agar plates (Oxoid, UK). Fn cells were collected by centrifugation and then suspended to a concentration of 1 × 10^6 colony-forming units (CFUs)/mL in PBS for subsequent experiments. Isolation and culture of human-derived organoids We performed primary isolation of organoids using fresh tumor samples obtained from glioblastoma patients post-surgery. Fresh glioblastoma tissues were rinsed twice with a washing solution and then minced. The tissues were digested with a digestive solution at 37°C. After digestion, the sample was centrifuged, the supernatant was discarded, and the cells were resuspended in Hanks' balanced salt solution (HBSS) and passed through a 100 µm cell strainer. The resulting cell filtrate was centrifuged again, and the supernatant was discarded. The cell pellet was treated with erythrocyte lysis buffer for 10 min. After centrifugation and removal of the supernatant, the cells were resuspended in a medium containing growth factors and matrix gel and seeded into low-adhesion 6-well plates (Corning). Finally, the plates were placed on a shaker in a 37°C incubator for culture. Cell counting kit-8 (CCK-8) assays Glioma cells were seeded in 96-well plates at 2,000 cells per well in the growth medium.
Cells were left untreated or infected with Fn at a multiplicity of infection (MOI) of 50:1. At 0 h (when the cells adhered), and at 24, 48, 72, and 96 h, 10 µL of CCK-8 reagent (APEXbio) was added to each well, followed by gentle mixing. The plates were then incubated in the dark for 2 h at 37°C. The absorbance at 450 nm was measured using a full-wavelength microplate reader (THERMO Scientific, Multiskan Sky). Colony formation assays Glioma cells were seeded in 6-well plates at 800 cells per well. Cells were left untreated or infected with Fn at a multiplicity of infection (MOI) of 50:1 and cultured for 10 days. Subsequently, the culture medium was removed, and the cell colonies were fixed with 4% paraformaldehyde (Biosharp) for 10 min at 25°C, followed by staining with 0.1% crystal violet (BKMAMLAB) for 20 min. Images were taken, and the colonies were counted using ImageJ software. Measurement of organoid diameter After resuscitating the organoids, the cryopreservative solution was removed by centrifugation, and the cell pellet was resuspended in antibiotic-free organoid medium. Glioma organoids were inoculated into 24-well plates at 40,000 cells/well, either untreated or incubated with bacteria at an MOI of 50:1. After 2 days of culture and organoid formation, each well was photographed using a Nikon Ts2FL inverted microscope at 4× magnification. Subsequent images were taken every 24 h. The diameters of the organoids were then measured from the photographs using ImageJ. Organoid embedding, sectioning, HE staining, and immunofluorescence On the 5th day of co-culture, after imaging, organoids were collected and fixed with 1 mL of 4% paraformaldehyde for 1 h. They were then transferred to 1.5 mL centrifuge tubes, and 20–50 µL of molten agarose was added to the organoid pellet and placed on ice for 30 min to solidify. The organoids were dehydrated according to the tissue processor protocol and embedded in paraffin the next day.
Paraffin blocks were sectioned continuously into 8 µm sections using a microtome. The prepared organoid sections were collected, and HE staining and immunofluorescence were performed by Huayin Medical Laboratory Center. Sections of organoids were examined using a Nikon Eclipse Ti2-E inverted fluorescence microscope at 20× magnification, and images were captured in the DAPI, FITC, and TEXRED channels, respectively. ATP assay for organoid viability Passaged glioblastoma organoids were seeded into low-adhesion 96-well plates (Corning) at 10,000 cells/well. The organoids were either left untreated or incubated with bacteria at an MOI of 50:1 and cultured on a shaker in a 37°C incubator. Two blank control groups without organoids were also set up. At 0, 24, 48, 72, and 96 h, ATP levels were measured using an ATP assay kit (abs50059, Absin) according to the manufacturer’s instructions. ELISA detection A 50 µg tumor tissue sample was homogenized using a tissue grinder (LUKA, LUKYM II) and then centrifuged to collect the supernatant. For U87 cells and organoids, samples were sonicated at a specific power setting and subjected to repeated freeze-thaw cycles at −20°C and room temperature to obtain the supernatant. According to the manufacturer’s protocol (Jiangsu Jingmei Biological Technology Co., Ltd.), 10 µL of each sample and 40 µL of sample diluent were added to a 96-well plate for ELISA analysis. Finally, the absorbance at 450 nm was measured for each well. Multiplex immunofluorescent assay Four-micrometer (4 µm) paraffin sections were deparaffinized in xylene and rehydrated in a series of graded alcohols. Antigen retrieval was performed in citrate buffer (pH 6): slides were boiled at high power in a microwave oven for 20 s, kept at a low boil at low power for 5 min, and allowed to cool naturally.
Multiplex fluorescence labeling was performed using Tyramide signal amplification-dendron-fluorophores with the NEON 7-color Allround Discovery Kit for FFPE (Histova Biotechnology). The multiplex antibody panels applied in this study were CD68 (abcam#213363, 1:200), GFAP (abcam#68428, 1:200), LPS (Lipopolysaccharide Core, HycultBiotech#HM6011, 1:200), and DAPI (abcam#ab104139). After all antibodies were detected, images were taken using TissueFAXS imaging software (v7.134) and viewed with TissueFAXS Viewer software, and fluorescent positive cells were counted using the StrataQuest tissue flow cytometry quantitative analysis system. For detailed analytical methods, please see the Supplemental Methods. Immunohistochemistry The pretreatment of paraffin tissue sections was performed following the same steps as for the multiplex immunofluorescence assay. The primary antibodies used were LPS (HycultBiotech#HM6011, 1:1,000) and LTA (GeneTex#16470, 1:1,000). Staining was developed with a DAB kit (Servicebio#G1211). Slides were scanned with a Pannoramic Scan II (3D HISTECH), and images were generated with SlideViewer (v2.5, 3D HISTECH). 16S rRNA FISH The 4 µm FFPE tissue slides were routinely deparaffinized and hydrated. Slides were stained for bacterial 16S rRNA (Cy3-labeled EUB338 probes, Future Biotech #FBFPC001) or a negative control (Cy3-labeled nonspecific complement probe, Future Biotech #FBFPC001) using the direct fluorescent bacteria in situ hybridization detection kit (Future Biotech #FB0016) according to the manufacturer’s instructions. Slides were scanned with a Pannoramic Scan II (3D HISTECH), and images were generated with SlideViewer (v2.5, 3D HISTECH). DNA extraction and sequencing Microbial DNA was extracted from tissue samples using the QIAGEN DNeasy PowerSoil Kit (#47014) according to the manufacturer’s protocol. Library construction and sequencing were performed by Novogene Corporation (Beijing, China).
The downstream data processing was performed using EasyAmplicon (v1.12), VSEARCH (v2.15.2), and USEARCH (v10.0.240). For detailed analytical methods, please see the Supplemental Methods. Microbiome data analysis The vegan package (v2.5–6) in R (v4.1) was used for alpha diversity analysis. The unweighted UniFrac distance matrix was generated by USEARCH. Beta diversity was assessed using principal coordinate analysis (PCoA). To visualize the results of the diversity analyses, the R software package ggplot2 was used. The compositions of the microbial community in the two groups were presented as stacked bar plots at the phylum level. Analysis of variance was performed with the software package STAMP (v2.1.3) using the Storey false-discovery rate approach for the correction of P values. LEfSe was performed using an online utility ( http://www.ehbio.com/ImageGP/index.php/Home/Index/LEFSe.html ) to analyze the differences in bacterial abundance at different taxonomic levels. Finally, Phylogenetic Investigation of Communities by Reconstruction of Unobserved States (PICRUSt) was used to predict microbial functional signatures. In the PICRUSt analysis, the significant KEGG pathways (level 3) among species were analyzed via Welch’s t-test in STAMP using the Bonferroni correction. Statistical analysis was conducted with GraphPad Prism (v8.3) software. Contamination correction To reduce false positives, 15 negative controls, including 5 environmental controls, 5 DNA extraction controls, and 5 PCR controls, were prepared and sequenced alongside the tissue samples. To correct for contamination, binomial tests were conducted comparing taxon abundance in the samples against the negative controls. The frequency of occurrence of a taxon in the negative control samples was used to estimate the probability p of a binomial distribution. For the binomial test, x and n represent the number of occurrences and the total counts, respectively.
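A minimal sketch of this contamination filter in Python (using scipy; the function and variable names are illustrative, not from the study's code):

```python
from scipy.stats import binomtest

def passes_contamination_filter(x: int, n: int,
                                neg_ctrl_freq: float,
                                alpha: float = 0.05) -> bool:
    """One-sided binomial test for a single taxon.

    x: occurrences of the taxon in the tissue samples
    n: total observations for the taxon comparison
    neg_ctrl_freq: frequency of the taxon in the negative controls,
        used as the binomial probability p
    Returns True when the taxon occurs significantly more often than
    the contamination background predicts (P < alpha), so it is kept.
    """
    result = binomtest(x, n, p=neg_ctrl_freq, alternative="greater")
    return result.pvalue < alpha

# A taxon seen in 8 of 100 observations, against a 2% background
# frequency in the negative controls, survives the filter:
print(passes_contamination_filter(8, 100, 0.02))   # True
```

The one-sided ("greater") alternative reflects the logic of decontamination: a genuine tissue taxon should be enriched above, not merely different from, its frequency in the negative controls.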
In the following analysis, taxa with a P-value < 0.05 were kept.

mRNA-seq

Frozen human glioma tissues, adjacent normal brain tissue, and mouse tumor tissues were used for RNA extraction. RNA-seq was performed as a profiler service by Genergy Biotechnology Corporation (Shanghai, China). The DESeq2 Bioconductor package was used to identify differentially expressed genes (DEGs); significant DEGs were selected based on P < 0.05, FDR < 0.05, and a log2 (fold change) > 1. The DEGs were also submitted for Kyoto Encyclopedia of Genes and Genomes (KEGG) analysis, and significant pathways with P < 0.05 are shown. For detailed analytical methods, please see the Supplemental Methods.

Metabolomics

Frozen human glioma tissues, adjacent normal brain tissue, and mouse tumor tissues were used for metabolomic profiling. Untargeted metabolomics profiling was performed by Biotree Biotechnology Corporation (Shanghai, China). All data were acquired by LC-MS/MS on a UHPLC system. Differential metabolites were screened by combining the results of Student's t-test (P < 0.05) with the Variable Importance in the Projection (VIP > 1) of the first principal component of the OPLS-DA model. KEGG annotation of the differential metabolites was performed using the Kyoto Encyclopedia of Genes and Genomes (KEGG) Pathway database. After mapping the differential metabolites to authoritative databases such as KEGG, PubChem, and the Human Metabolome Database (HMDB) to obtain matching information, we searched the human pathway databases and analyzed the metabolic pathways. For detailed analytical methods, please see the Supplemental Methods.

Multi-omics analyses

Correlation analysis of differentially expressed genes, differential metabolites, and differential bacteria was performed by calculating Spearman correlation coefficients with the psych package in R.
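The DEG selection rule stated above (P < 0.05, FDR < 0.05, log2 fold-change cutoff of 1) amounts to a simple row filter. A minimal sketch with invented gene records, assuming the cutoff is applied to the absolute log2 fold change so that downregulated genes also qualify:

```python
# Invented example records; in the study these would come from DESeq2 output.
genes = [
    {"gene": "HTR1D", "p": 0.001, "fdr": 0.010, "log2fc": -1.8},
    {"gene": "STAT4", "p": 0.004, "fdr": 0.030, "log2fc":  1.4},
    {"gene": "ACTB",  "p": 0.600, "fdr": 0.800, "log2fc":  0.1},
    {"gene": "RFK",   "p": 0.010, "fdr": 0.040, "log2fc":  0.6},  # fails the fold-change cutoff
]

def is_deg(g, p_cut=0.05, fdr_cut=0.05, lfc_cut=1.0):
    """Apply the stated thresholds; |log2FC| > 1 is an assumption."""
    return g["p"] < p_cut and g["fdr"] < fdr_cut and abs(g["log2fc"]) > lfc_cut

degs = [g["gene"] for g in genes if is_deg(g)]
print(degs)  # ['HTR1D', 'STAT4']
```

The same pattern covers the metabolite screen (swap the FDR condition for VIP > 1).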
Subsequently, we constructed a network integrating the interactions among the differentially expressed genes, metabolites, and bacteria and visualized it using Cytoscape software. Mediation analysis was carried out using the mediation package.

Animal experiments

Xenograft mouse models were generated by subcutaneous injection of U87-MG cells (a human glioblastoma cell line) into nude mice. U87 cells (1 × 10⁶) in the logarithmic growth phase in 100 µL of phosphate-buffered saline (PBS) were subcutaneously injected into the right flank of the mice. When the tumor volume reached 200 mm³ (day 1), mice were randomly divided into four treatment groups (intratumoral injection): the PBS group received an injection of PBS, the Fn group received an injection of 5 × 10⁶ CFU Fusobacterium nucleatum (Fn), the Fn + metronidazole (MTZ) group received an injection of Fn together with MTZ treatment, and the MTZ group received an injection of MTZ alone. For the Fn and Fn+MTZ groups, the bacteria were incubated with PBS or metronidazole for 30 minutes before inoculation. On days 2 and 3, mice were treated with metronidazole or the PBS vehicle twice daily, at least 8 h apart; on days 4–7, once a day. On day 8, tumor tissue was obtained and weighed. Tumor tissue was immediately frozen in liquid nitrogen for subsequent experiments.

We obtained 110 fresh frozen tissues, 8 paraffin tissues, and 10 stool samples from patients with pathologically confirmed glioma. The clinical information of the patients is shown in Table S1. The stool samples were taken before cancer treatment, and individuals who had received preoperative radiation or chemotherapy or had a previous history of glioma were excluded. All samples were collected from Zhujiang Hospital of Southern Medical University (Guangzhou, China).
Tissue samples were meticulously collected in sterile cryotubes and promptly transported in a portable liquid nitrogen tank to the Clinical Biobank Center, where they were kept at −196°C for long-term storage until DNA extraction. Fecal samples were rapidly transferred to a −80°C freezer for storage until further use. Athymic BALB/c nude mice (4 weeks old; 20–25 g) were purchased from GemPharmatech Corporation (Nanjing, China) and maintained in a specific-pathogen-free environment under a 12 h light-dark cycle with free access to food and water. Human glioblastoma cell lines (U87, U251, and Ln229) were obtained from the Chinese Academy of Sciences. The cells were cultured in Dulbecco's Modified Eagle Medium (DMEM; Gibco, Carlsbad, CA, USA) supplemented with 10% fetal bovine serum (Gibco). All cell lines were maintained at 37°C and 5% CO₂ in a humidified environment. Fusobacterium nucleatum ATCC 25586 (Fn) was kindly gifted by Dr. Songhe Guo. Fn was grown anaerobically at 37°C for 48–72 h on blood agar plates (Oxoid, UK). Fn was centrifuged and then suspended to a concentration of 1 × 10⁶ colony-forming units (CFU)/mL in PBS for subsequent experiments. We performed primary isolation of organoids using fresh tumor samples obtained from glioblastoma patients post-surgery. Fresh glioblastoma tissues were rinsed twice with a washing solution and then minced. The tissues were digested with a digestive solution at 37°C. After digestion, the sample was centrifuged, the supernatant was discarded, and the cells were resuspended in Hanks' balanced salt solution (HBSS) and passed through a 100 µm cell strainer. The resulting cell filtrate was centrifuged again, and the supernatant was discarded. The cell pellet was treated with erythrocyte lysis buffer for 10 min. After centrifugation and removal of the supernatant, the cells were resuspended in a medium containing growth factors and matrix gel and seeded into low-adhesion 6-well plates (Corning).
Finally, the plates were placed on a shaker in a 37°C incubator for culture. Glioma cells were seeded in 96-well plates at 2,000 cells per well in growth medium. Cells were left untreated or infected with Fn at a multiplicity of infection (MOI) of 50:1. At 0 h (when the cells had adhered) and at 24, 48, 72, and 96 h, 10 µL of CCK-8 reagent (APEXbio) was added to each well, followed by gentle mixing. The plates were then incubated in the dark for 2 h at 37°C. The absorbance at 450 nm was measured using a full-wavelength microplate reader (Thermo Scientific, Multiskan Sky). Glioma cells were seeded in 6-well plates at 800 cells per well, left untreated or infected with Fn at an MOI of 50:1, and cultured for 10 days. Subsequently, the culture medium was removed, and the cell colonies were fixed with 4% paraformaldehyde (Biosharp) for 10 min at 25°C, followed by staining with 0.1% crystal violet (BKMAMLAB) for 20 min. Images were taken, and the colonies were counted using ImageJ software. After resuscitating the organoids, the cryopreservation solution was removed by centrifugation, and the cell pellet was resuspended in antibiotic-free organoid medium. Glioma organoids were inoculated into 24-well plates at 40,000 cells/well and either left untreated or incubated with bacteria at an MOI of 50:1. After 2 days of culture and organoid formation, each well was photographed using a Nikon Ts2FL inverted microscope at 4× magnification. Subsequent images were taken every 24 h. The diameters of the organoids were then measured from the photographs using ImageJ. On the 5th day of co-culture, after imaging, organoids were collected and fixed with 1 mL of 4% paraformaldehyde for 1 h. They were then transferred to 1.5 mL centrifuge tubes, and 20–50 µL of molten agarose was added to the organoid pellet and placed on ice for 30 min to solidify. The organoids were dehydrated according to the tissue processor protocol and embedded in paraffin the next day.
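The inoculum sizes implied by the MOI of 50:1 used throughout these co-cultures follow from one line of arithmetic (bacteria per well = seeded cells × MOI); a trivial helper makes the numbers explicit:

```python
def inoculum_cfu(n_cells, moi=50):
    """Bacterial CFU needed for a well seeded with n_cells at the given MOI."""
    return n_cells * moi

print(inoculum_cfu(2000))   # 100000  -> CCK-8 assay wells (2,000 cells)
print(inoculum_cfu(40000))  # 2000000 -> organoid wells (40,000 cells)
```

At the stock concentration of 1 × 10⁶ CFU/mL, for example, a 2,000-cell well would need 100 µL of suspension.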
Paraffin blocks were sectioned continuously into 8 µm sections using a microtome. The prepared organoid sections were collected, and HE staining and immunofluorescence were performed by Huayin Medical Laboratory Center. Sections of organoids were examined using a Nikon Eclipse Ti2-E inverted fluorescence microscope at 20× magnification, and images were captured in the DAPI, FITC, and TEXRED channels. Passaged glioblastoma organoids were seeded into low-adhesion 96-well plates (Corning) at 10,000 cells/well. The organoids were either left untreated or incubated with bacteria at an MOI of 50:1 and cultured on a shaker in a 37°C incubator. Two blank control groups without organoids were also set up. At 0, 24, 48, 72, and 96 h, ATP levels were measured using an ATP assay kit (abs50059, Absin) according to the manufacturer's instructions. A 50 µg tumor tissue sample was homogenized using a tissue grinder (LUKA, LUKYM II) and then centrifuged to collect the supernatant. U87 cells and organoids were sonicated at a specific power setting and subjected to repeated freeze-thaw cycles at −20°C and room temperature to obtain the supernatant. According to the manufacturer's protocol (Jiangsu Jingmei Biological Technology Co., Ltd.), 10 µL of each sample and 40 µL of sample diluent were added to a 96-well plate for ELISA analysis. Finally, the absorbance at 450 nm was measured for each well. Four-micrometer paraffin sections were deparaffinized in xylene and rehydrated through a series of graded alcohols. Antigen retrieval was performed in citrate buffer (pH 6), boiled at high power in a microwave oven for 20 s, maintained at low power for 5 min in a gently boiling state, and cooled naturally after turning off the heat.
Study design

To investigate tumor-associated microbial communities, we enrolled 50 glioma patients in this study. Fifty glioma and 15 matched adjacent normal brain fresh frozen tissues were used for 16S rRNA gene sequencing. We simultaneously collected 15 negative control samples, including 5 environmental controls, 5 DNA extraction controls, and 5 PCR controls, for contamination filtering. Twenty glioma and six matched adjacent normal brain fresh frozen tissues were used for untargeted metabolomics analysis. Sixteen glioma and three matched adjacent normal brain fresh frozen tissues were used for transcriptomics sequencing. Four glioma and four adjacent normal brain paraffin tissues were used for bacterial imaging. To further elucidate the role of Fusobacterium nucleatum, the most renowned tumor-associated species within the Fusobacterium genus, we designed an in vivo animal model experiment. A schematic diagram of the design of the entire study is shown in .
Profiling of microbiota in human glioma tissues

To determine the homogeneity of the microbiota within glioma tissues, we conducted a survey of microbial diversity in 35 unmatched tissues and 15 matched tissues and found no significant difference in α or β diversity between the two groups. Therefore, in the subsequent analysis, we used all 50 glioma samples as representatives of the glioma group (see Fig. S1a and b in the supplemental material). In our present microbiome investigation, the overall alpha diversity of the G group was significantly higher than that of the NAT group . Employing unconstrained principal coordinate analysis (PCoA) with Bray-Curtis distances, we demonstrated a separation between the glioma-associated microbiota and those present in adjacent normal brain tissue . The results showed that the microbial composition of the G and NAT groups was markedly different. Specifically, we found that at the phylum level, the tumor-associated microbiota was dominated by Firmicutes and Proteobacteria, followed by Actinobacteria, Fusobacteria, and Bacteroidetes . The relative abundance of the phyla Firmicutes and Fusobacteria was greater in the G group than in the NAT group, whereas the phylum Proteobacteria exhibited the inverse relationship . These observations align with previous reports investigating the potential brain microbiome in Alzheimer's disease . We performed linear discriminant analysis effect size (LEfSe) to identify potential glioma biomarkers in the intratumoral microbiota. We found 50 discriminatory OTUs as key discriminants, including six genera: Fusobacterium, Longibaculum, Intestinimonas, Pasteurella, Limosilactobacillus, and Arthrobacter (all with LDA scores (log10) > 4) . These genera were significantly enriched in the G group.
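For intuition, the two diversity notions compared above can be computed from raw count vectors in a few lines. This is an illustrative stdlib sketch with made-up communities, not the vegan/USEARCH pipeline the study actually used (which relied on unweighted UniFrac rather than Bray-Curtis for the distance matrix):

```python
import math

def shannon(counts):
    """Shannon alpha diversity H = -sum(p_i * ln p_i) over nonzero taxa."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    return 1.0 - 2.0 * shared / (sum(a) + sum(b))

even_community   = [40, 30, 20, 10]  # evenly spread taxa -> higher diversity
skewed_community = [85, 10, 4, 1]    # one dominant taxon -> lower diversity
print(shannon(even_community) > shannon(skewed_community))  # True
print(bray_curtis(even_community, even_community))          # 0.0
```

A PCoA then embeds the pairwise dissimilarity matrix into a low-dimensional space for plots like the one referenced above.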
To characterize functional alterations in the intratumoral bacteria, we used PICRUSt to predict functional orthologs between the G and NAT groups based on the Kyoto Encyclopedia of Genes and Genomes (KEGG). Following Bonferroni correction, we found 10 pathways differentially enriched between the G and NAT groups, including neuroactive ligand-receptor interaction . To assess potential contributing factors to microbial diversity, we conducted stratification analyses by clinical features, including age, sex, tumor size, WHO grade, and Ki-67. Interestingly, no significant relationship was observed between alpha diversity and these clinical characteristics (see Fig. S2a through e in the supplemental material); however, beta diversity was found to be significantly associated with the WHO grade of the glioma and Ki-67 expression (see Fig. S2f and g). Additionally, to explore the relationship between the intratumoral microbiota and the gut microbiota, we collected fecal samples from the same cohort of patients and performed 16S rRNA sequencing. Strikingly, we discovered significant differences in β diversity between tumor tissue and fecal samples from glioma patients, whereas the differences in α diversity were not significant (see Fig. S3a and b in the supplemental material).

Morphological characteristics of bacteria in human glioma tissues

To further validate the existence of bacteria in glioma tissues, we employed serial sections for immunohistochemical staining of bacterial lipoteichoic acid (LTA) and lipopolysaccharide (LPS), as well as FISH staining for 16S rRNA. Remarkably, we observed bacterial LPS and RNA signals at the same location, whereas no LTA signal was detected . Subsequently, to determine the localization of bacteria in glioma tissue, we quantified the cells co-expressing GFAP and LPS, as well as CD68 and LPS, using multicolor immunofluorescence staining of tissue sections.
Statistical analysis revealed a significantly higher abundance of GFAP⁺LPS⁺ cells than CD68⁺LPS⁺ cells in glioma tissues . This observation suggests that bacterial LPS is more prevalent in tumor cells than in macrophages. Additionally, we performed the same multicolor immunofluorescence on adjacent normal brain tissue, revealing a lower bacterial LPS signal compared to glioma tissue (see Fig. S4a and b in the supplemental material).

Profiling of microbiota-associated host gene expression in human glioma tissues

To understand the potential interactions between the intratumoral microbiota and host differential gene expression, we conducted transcriptome sequencing on the same set of samples and assessed their associations using Spearman correlation analysis. First, principal component analysis (PCA) revealed a significant separation between the G and NAT groups (see Fig. S5a in the supplemental material). Furthermore, we observed remarkable alterations in the abundance of various mRNAs in glioma tissues compared to adjacent normal brain tissues (see Fig. S5b). According to our definition, we identified 594 differentially expressed genes (see Table S2). KEGG pathway enrichment analysis was performed on the up- and downregulated differentially expressed genes; Figure S5c shows the top 30 enriched terms, including neuroactive ligand-receptor interaction and glioma. Next, we performed Spearman analysis to investigate the potential relationships between all the differentially expressed genes and alpha diversity. Intriguingly, we identified 52 differentially expressed genes that exhibited significant correlations with alpha diversity . Building upon these findings, we further explored the association between these 52 differentially expressed genes and the 6 differential bacteria. Notably, our results revealed that 28 of these differentially expressed genes displayed a predominantly negative correlation with the differential bacteria .
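The gene-diversity correlation step described above reduces to Spearman's rank correlation. A stdlib-only sketch on invented values (the study used the psych package in R; this toy ranker ignores ties):

```python
def ranks(xs):
    """Rank values from 1..n (no tie correction in this sketch)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for position, i in enumerate(order):
        r[i] = position + 1.0
    return r

def spearman_rho(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    var_x = sum((a - mx) ** 2 for a in rx)
    var_y = sum((b - my) ** 2 for b in ry)
    return cov / (var_x * var_y) ** 0.5

alpha_div  = [1.1, 1.5, 2.0, 2.4, 3.0]  # per-sample alpha diversity (invented)
expression = [9.0, 7.5, 6.0, 5.2, 4.1]  # one gene, monotonically decreasing
print(spearman_rho(alpha_div, expression))  # -1.0
```

A perfectly monotone decreasing relationship gives rho = -1, mirroring the predominantly negative gene-bacteria correlations reported above.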
Importantly, these 28 differentially expressed genes belong to the downregulated genes in glioma, and KEGG pathway enrichment analysis demonstrated their enrichment in pathways such as cholinergic synapse, serotonergic synapse, glutamatergic synapse, and dopaminergic synapse . In summary, the differentially expressed genes related to the differential bacteria in glioma are enriched in several synapse-related pathways, suggesting a potential interaction between the intratumoral bacteria and the host that may be associated with synaptic activity.

Profiling of microbiota-associated metabolites in human glioma tissues

We conducted metabolomic assays on the same set of glioma tissues and identified microbe-associated metabolites using the microbiome data. Remarkably, the PCA plots demonstrated a clear separation of the metabolome between the G and NAT groups in both ES+ (electrospray ionization positive mode in mass spectrometry) and ES− (electrospray ionization negative mode) (see Fig. S6a and b in the supplemental material). Furthermore, the OPLS-DA score scatterplots exhibited superior separation between the G and NAT groups, further confirming the distinct metabolic profiles of glioma tissues compared to adjacent normal tissue controls, in both ES+ and ES− . To identify differential metabolites, the variable importance in the projection (VIP) of the first principal component of the OPLS-DA model was obtained, and metabolites with VIP > 1 and P < 0.05 were defined as differential. We identified 79 differential metabolites in ES+ and 44 in ES− (see Table S3 in the supplemental material), which were visualized with hierarchical clustering heatmaps (see Fig. S6c and d). Subsequently, all the differential metabolites underwent regulatory pathway analysis to identify the metabolic pathways most highly correlated with the metabolites.
Our analysis revealed six significantly abnormal metabolic pathways in the G group: aminoacyl-tRNA biosynthesis; arginine and proline metabolism; nitrogen metabolism; taurine and hypotaurine metabolism; alanine, aspartate, and glutamate metabolism; and pyrimidine metabolism . To further unravel the metabolites associated with the intratumoral microbiota in glioma, we performed a Spearman analysis between all the differential metabolites and alpha diversity. Remarkably, we identified 16 metabolites that exhibited correlations with alpha diversity . Subsequently, we explored the associations between these 16 differential metabolites and the six differential bacteria. Our results showed some interesting correlations: (R)-N-methylsalsolinol displayed positive correlations with Longibaculum, Limosilactobacillus, and Intestinimonas, while N-acetylglutamic acid, N-acetyl-L-aspartic acid, N-acetylaspartylglutamic acid, and D-alanine exhibited negative correlations with Arthrobacter . It is worth noting that (R)-N-methylsalsolinol, a dopaminergic neurotoxin, has been found to be increased in the cerebrospinal fluid of Parkinson's disease patients . N-acetylaspartylglutamic acid (NAAG) and N-acetyl-L-aspartic acid (NAA), known neurotransmitters and their precursor substances, have been reported to inhibit the differentiation of glioma stem cells . Additionally, D-alanine, a peptidoglycan constituent of bacterial cell walls, has implications as a biomarker of and treatment for schizophrenia .

Integrated multi-omics analysis of human glioma tissues

As demonstrated in the previous sections, we identified significant correlations between multiple genes, metabolites, and the microbiota within glioma. In this section, we further explored the complex interactions among these three factors. Network analysis unveiled meaningful correlations, forming interconnected networks among the intratumoral microbiota, tissue metabolites, and host genes .
To assess whether genes mediate microbial effects on tumor metabolism, we performed a mediation analysis. Remarkably, we found that 5-hydroxytryptamine receptor 1D (HTR1D) and signal transducer and activator of transcription 4 (STAT4) were associated with a majority of the characteristic bacteria and differential metabolites . However, the mediation effects of the 10 pathways mediated by HTR1D and STAT4 were not statistically significant (P mediation > 0.05) (see Fig. S7a and b in the supplemental material). Further investigation focused on evaluating the role of metabolites in mediating the impact of the microbiota on host gene expression. Our results indicated that N-acetylglutamic acid, PC(22:5(7Z,10Z,13Z,16Z,19Z)/22:4(7Z,10Z,13Z,16Z)), and (R)-N-methylsalsolinol displayed correlations with some characteristic bacteria and some differentially expressed genes . Notably, mediation analysis showed that Arthrobacter causally contributed to riboflavin kinase (RFK) through N-acetylglutamic acid (P mediation = 0.02) . Additionally, Longibaculum causally contributed to glutamate ionotropic receptor NMDA type subunit 2B (GRIN2B) through PC(22:5(7Z,10Z,13Z,16Z,19Z)/22:4(7Z,10Z,13Z,16Z)) (P mediation = 0.016) . Moreover, Limosilactobacillus causally contributed to ribosomal modification protein rimK-like family member A (RIMKLA) through (R)-N-methylsalsolinol (P mediation = 0.042) . N-acetylglutamic acid (NAG) has been found to be involved in the regulation of NAAG degradation . RFK is the enzyme responsible for synthesizing flavin mononucleotide (FMN) . Evidence suggests that FMN can ameliorate the degeneration of dopaminergic neurons . PC(22:5(7Z,10Z,13Z,16Z,19Z)/22:4(7Z,10Z,13Z,16Z)) represents a phosphatidylcholine containing docosapentaenoic acid (DPA), which has been detected in metabolomic studies of a variety of neurologic diseases . DPA is an omega-3 polyunsaturated fatty acid with protective effects on neurons .
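The mediation logic tested above (bacterium → metabolite → gene) can be illustrated with the classic product-of-coefficients decomposition on simulated data. This is a deliberately simplified stdlib sketch, not the R mediation package the study used: variable names are placeholders, and path b is fitted without adjusting for the bacterium (a full analysis would include that covariate and a bootstrap P value).

```python
import random

def ols_slope(x, y):
    """Simple least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

random.seed(0)
bacterium  = [random.gauss(0, 1) for _ in range(500)]
# Path a: metabolite responds to the bacterium; path b: gene responds to the metabolite.
metabolite = [0.8 * b + random.gauss(0, 0.3) for b in bacterium]
gene       = [0.5 * m + random.gauss(0, 0.3) for m in metabolite]

a = ols_slope(bacterium, metabolite)
b = ols_slope(metabolite, gene)
indirect_effect = a * b  # recovers roughly 0.8 * 0.5 = 0.4 on this simulation
print(abs(indirect_effect - 0.4) < 0.1)  # True
```

When the indirect effect a·b is sizable and its uncertainty interval excludes zero, the metabolite is said to mediate the bacterium-gene association, which is the pattern reported for NAG, the DPA-containing phosphatidylcholine, and (R)-N-methylsalsolinol above.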
GRIN2B is a subunit of the NMDA receptor, which plays a crucial role in neural development , and has been implicated in communication between the gut microbiota and the brain . RIMKLA, also known as ribosomal modification protein rimK-like family member A, has been identified as the synthetase of NAAG . In conclusion, our integrative analysis indicated that the intratumoral microbiota of glioma may affect the expression of neuron-related genes through metabolites related to neuronal function.

Integration of transcriptome and metabolome analysis reveals the potential role of Fusobacterium nucleatum in a glioma mouse model

Remarkably, we detected an abundance of Fusobacterium in glioma tumor tissue . To further confirm the presence of Fusobacterium in glioma, we stained human glioma tissue and matched adjacent normal tissue samples using a Cy3-labeled Fusobacterium probe. As the results show, Fusobacterium levels in the tumor samples were higher than those in the matched normal brain tissue (see Fig. S8a and b in the supplemental material). Given that Fusobacterium nucleatum (Fn) accelerates the development of colorectal and breast cancers, we used the subcutaneous glioma xenograft mouse model to examine whether it also affects tumor progression in glioma, following the scheme in . Briefly, mice were divided into four groups that received intratumoral injections of PBS, Fn, metronidazole (MTZ), or Fn combined with MTZ. shows that the Fn group had a much larger tumor weight than the PBS and Fn+MTZ groups. Consistently, the trend persisted when evaluating tumor size . These results suggest that Fn accelerates tumor growth, while Fn-induced tumor exacerbation can be prevented by metronidazole treatment. To gain deeper insight into the specific mechanism underlying Fn promotion of glioma growth, we collected tumor tissues from the four mouse groups and performed transcriptomic analysis.
The differentially expressed genes among these groups are detailed in Table S4 in the supplemental material. A Venn diagram revealed 70 genes overlapping between the Fn vs PBS and Fn vs Fn+MTZ comparisons that were not present in the Fn+MTZ vs MTZ comparison (see Fig. S9a). Subsequently, we performed KEGG pathway enrichment analysis on these 70 genes, which identified significant enrichment of the IL-17 and TNF signaling pathways . Since CCL2, CXCL1, and CXCL2 were enriched in both pathways, we performed differential analysis of these three genes. The results revealed that the expression levels of CCL2, CXCL1, and CXCL2 in the Fn group were markedly higher than those in the other three groups . Notably, previous studies have revealed the close association of these genes with Fn in promoting tumor progression . In addition, an ELISA assay validated the results of the differential gene expression analysis of CCL2, CXCL1, and CXCL2 in these four groups . Furthermore, we conducted metabolomic analyses on the four groups of mouse tumor tissues that received distinct treatments. The differential metabolites among these groups are detailed in Table S5 in the supplemental material. A Venn diagram revealed 11 metabolites that overlapped between the Fn vs PBS and Fn vs Fn+MTZ comparisons but were not observed in the Fn+MTZ vs MTZ comparison (see Fig. S9b). Subsequently, we performed a correlation analysis between these 11 differential metabolites and the 70 differentially expressed genes. Notably, N-glycolylneuraminic acid exhibited a strong correlation with numerous differentially expressed genes . Furthermore, visually illustrates that the level of N-glycolylneuraminic acid in the Fn group was remarkably higher than in the other three groups. Remarkably, as a sialic acid, N-glycolylneuraminic acid (Neu5Gc) is regarded as a potential human cancer biomarker .
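The Venn-diagram step described above reduces to plain set algebra: keep features changed in Fn vs PBS and in Fn vs Fn+MTZ, but not in Fn+MTZ vs MTZ. A sketch with placeholder gene names (only CCL2/CXCL1/CXCL2 are taken from the text; the GENE_* entries are invented):

```python
fn_vs_pbs    = {"CCL2", "CXCL1", "CXCL2", "GENE_A", "GENE_B"}
fn_vs_fnmtz  = {"CCL2", "CXCL1", "CXCL2", "GENE_A", "GENE_C"}
fnmtz_vs_mtz = {"GENE_A", "GENE_D"}

# Overlap of the two Fn comparisons, minus anything also changed by MTZ alone
fn_specific = (fn_vs_pbs & fn_vs_fnmtz) - fnmtz_vs_mtz
print(sorted(fn_specific))  # ['CCL2', 'CXCL1', 'CXCL2']
```

This intersection-minus-difference is what isolates Fn-driven changes that metronidazole reverses, and the same operation applies unchanged to the 11-metabolite overlap.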
Previous studies have reported the abundance of Neu5Gc in mouse brain tumor tissues . Together, these data suggested that Fn promoted glioma growth by increasing the levels of N -acetylneuraminic acid and the expression of CCL2, CXCL1, and CXCL2. Fusobacterium nucleatum promotes glioma proliferation and upregulates CXCL2 levels in an in vitro model To better simulate clinical conditions, we established patient-derived glioma organoid models to further validate the effect of Fn on promoting glioma proliferation and co-cultured it with Fn . Then, we measured glioma organoid viability using the ATP assay, following the methods outlined in previous research . The results demonstrated that organoids co-cultured with Fn had higher viability than the control group . After 2 days of co-culture, we observed that the organoids co-cultured with Fn exhibited larger diameters than the control group . After 5 days of co-culture, we collected the organoids, prepared paraffin-embedded sections, and performed Ki67 staining. The results showed higher Ki67 expression in the Fn co-cultured organoids compared to the control group . Similarly, we also tested the effect of Fn on glioma cell proliferation in different glioma cell lines. Consistently, glioma cells co-cultured with Fn exhibited higher proliferation rates compared to the control group (see Fig. S10a and b in the supplemental material). These results collectively indicate that Fn treatment promotes glioma proliferation. To investigate tumor-associated microbial communities, we enrolled 50 glioma patients in this study. Fifty glioma and 15 matched adjacent normal brain fresh frozen tissues were used for 16S rRNA gene sequencing. We simultaneously collected 15 negative control samples, including 5 environmental controls, 5 DNA extraction controls, and 5 PCR controls for contamination filtering. Twenty glioma and six matched adjacent normal brain fresh frozen tissues were used for untargeted metabolomics analysis. 
Sixteen glioma and three matched adjacent normal brain fresh frozen tissues were used for transcriptomics sequencing. Four glioma and four adjacent normal brain paraffin tissues were used for bacterial imaging. To further elucidate the role of Fusobacterium nucleatum, the most renowned tumor-associated species within the Fusobacterium genus, we designed an in vivo animal model experiment. A schematic diagram of the design of the entire study is shown in . To determine the homogeneity of the microbiota within glioma tissues, we conducted a survey of microbial diversity in 35 unmatched tissues and 15 matched tissues and found no significant difference in α or β diversity between the two groups. Therefore, in the subsequent analysis, we selected 50 glioma samples as representatives of the glioma group (see Fig. S1a and b in the supplemental material). In our microbiome investigation, the overall alpha diversity of the G group was significantly higher than that of the NAT group . Using unconstrained principal coordinate analysis (PCoA) based on Bray-Curtis distances, we demonstrated a separation between the glioma-associated microbiota and that present in adjacent normal brain tissue . The microbial composition of the G and NAT groups was markedly different. Specifically, at the phylum level, the tumor-associated microbiota was dominated by Firmicutes and Proteobacteria, followed by Actinobacteria, Fusobacteria, and Bacteroidetes . The relative abundance of the phyla Firmicutes and Fusobacteria was greater in the G group than in the NAT group, whereas the Proteobacteria phylum exhibited an inverse relationship . These observations align with previous investigations of the potential brain microbiome in Alzheimer's disease . We performed linear discriminant analysis effect size (LEfSe) to identify potential glioma biomarkers in the intratumoral microbiota.
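The beta-diversity ordination described above (unconstrained PCoA on Bray-Curtis distances) can be sketched in a few lines. This is an illustrative sketch on synthetic OTU counts, not the authors' pipeline; the data and sample sizes are made up.

```python
# Illustrative sketch: classical PCoA on a Bray-Curtis dissimilarity matrix.
# The OTU count table below is synthetic.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
otu = rng.integers(0, 50, size=(6, 20)).astype(float)  # 6 samples x 20 OTUs

# Pairwise Bray-Curtis dissimilarities between samples
D = squareform(pdist(otu, metric="braycurtis"))

# Classical PCoA: double-center -0.5 * D^2, then eigendecompose
n = D.shape[0]
J = np.eye(n) - np.ones((n, n)) / n           # centering matrix
B = -0.5 * J @ (D ** 2) @ J
evals, evecs = np.linalg.eigh(B)
order = np.argsort(evals)[::-1]               # largest eigenvalues first
evals, evecs = evals[order], evecs[:, order]
coords = evecs[:, :2] * np.sqrt(np.maximum(evals[:2], 0))  # PCo1, PCo2
print(coords.shape)
```

The first two columns of `coords` are the ordination axes that would be plotted to visualize the G vs NAT separation.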
We identified 50 discriminatory OTUs as key discriminants, including six genera: Fusobacterium, Longibaculum, Intestinimonas, Pasteurella, Limosilactobacillus, and Arthrobacter (all with LDA scores (log10) >4) . These genera were significantly enriched in the G group. To characterize functional alterations in intratumoral bacteria, we used PICRUSt to predict functional orthologs between the G and NAT groups based on the Kyoto Encyclopedia of Genes and Genomes (KEGG). Following Bonferroni correction, we found 10 pathways differentially enriched between the G and NAT groups, including neuroactive ligand-receptor interaction . To assess potential contributing factors to microbial diversity, we conducted stratification analyses by clinical features, including age, sex, tumor size, WHO grade, and Ki-67. Interestingly, no significant relationship was observed between alpha diversity and these clinical characteristics (see Fig. S2a through e in the supplemental material); however, beta diversity was significantly associated with WHO grade of glioma and Ki-67 expression (see Fig. S2f and g). Additionally, to explore the relationship between intratumoral microbiota and gut microbiota, we collected fecal samples from the same cohort of patients and performed 16S rRNA sequencing. Strikingly, we discovered significant differences in β diversity between tumor tissue and fecal samples from glioma patients, whereas the differences in α diversity were not significant (see Fig. S3a and b in the supplemental material). To further validate the existence of bacteria in glioma tissues, we employed serial sections for immunohistochemical staining of bacterial lipoteichoic acid (LTA) and lipopolysaccharide (LPS), as well as FISH staining for 16S rRNA. Remarkably, we observed bacterial LPS and RNA signals at the same location, whereas no LTA signal was detected .
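The Bonferroni-corrected pathway comparison described above can be sketched as follows. This is a hedged illustration on synthetic predicted-pathway abundances (not actual PICRUSt output), using a Mann-Whitney U test per pathway as one reasonable choice of group test.

```python
# Illustrative sketch (synthetic data): per-pathway group comparison with
# Bonferroni correction, mirroring the G vs NAT pathway screen in the text.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
n_path = 40
g = rng.normal(0.0, 1.0, size=(n_path, 30))    # pathways x G samples
nat = rng.normal(0.0, 1.0, size=(n_path, 15))  # pathways x NAT samples
g[:5] += 2.0                                   # make 5 pathways truly shifted

pvals = np.array([mannwhitneyu(g[i], nat[i]).pvalue for i in range(n_path)])
p_bonf = np.minimum(pvals * n_path, 1.0)       # Bonferroni adjustment
enriched = np.flatnonzero(p_bonf < 0.05)       # pathways surviving correction
print(enriched)
```

Bonferroni simply multiplies each raw P value by the number of tests and caps it at 1, which is conservative but simple to report.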
Subsequently, to determine the localization of bacteria in glioma tissue, we quantified the cells co-expressing GFAP and LPS, as well as CD68 and LPS, using multicolor immunofluorescence staining on tissue sections. Statistical analysis revealed a significantly higher abundance of GFAP+LPS+ cells compared to CD68+LPS+ cells in glioma tissues . This observation suggests that bacterial LPS is more prevalent in tumor cells than in macrophages. Additionally, we performed simultaneous multicolor immunofluorescence on adjacent normal brain tissue, revealing a lower bacterial LPS signal compared to glioma tissue (see Fig. S4a and b in the supplemental material). To understand the potential interactions between intratumoral microbiota and host differential gene expression, we conducted transcriptome sequencing on the same set of samples and assessed their associations using Spearman correlation analysis. First, principal component analysis (PCA) revealed a significant separation between the G and NAT groups (see Fig. S5a in the supplemental material). Furthermore, we observed remarkable alterations in the abundance of various mRNAs within glioma tissues compared to adjacent normal brain tissues (see Fig. S5b). According to our criteria, we identified 594 differentially expressed genes (see Table S2). KEGG pathway enrichment analysis was performed on the upregulated and downregulated gene sets. Figure S5c shows the top 30 enriched terms, including neuroactive ligand-receptor interaction and glioma. Next, we performed Spearman correlation analysis to investigate the potential relationships between all the differentially expressed genes and alpha diversity. Intriguingly, we identified 52 differentially expressed genes that exhibited significant correlations with alpha diversity . Building upon these findings, we further explored the association between these 52 differentially expressed genes and the 6 differential bacteria.
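The gene-vs-diversity screen described above reduces to a Spearman correlation per gene. The sketch below uses synthetic expression and diversity values and a plain P < 0.05 cutoff; it illustrates the logic, not the authors' exact thresholds or data.

```python
# Illustrative sketch (synthetic data): screen genes for significant Spearman
# correlation between expression and per-sample alpha diversity.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_samples, n_genes = 16, 200
alpha_div = rng.normal(3.0, 0.5, n_samples)            # e.g., Shannon index
expr = rng.normal(0.0, 1.0, size=(n_genes, n_samples)) # genes x samples
expr[:10] += 4.0 * alpha_div                           # 10 genes track diversity

hits = []
for g in range(n_genes):
    rho, p = spearmanr(alpha_div, expr[g])
    if p < 0.05:
        hits.append((g, rho))
print(len(hits))
```

Each hit carries its correlation sign, which is what lets the downstream analysis distinguish predominantly negative from positive gene-bacteria associations.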
Notably, our results revealed that 28 of these differentially expressed genes displayed a predominantly negative correlation with the differential bacteria . Importantly, these 28 differentially expressed genes are among the downregulated genes in glioma, and KEGG pathway enrichment analysis demonstrated their enrichment in pathways such as cholinergic synapse, serotonergic synapse, glutamatergic synapse, and dopaminergic synapse . In summary, the bacteria-associated differentially expressed genes in glioma were enriched in several synapse-related pathways, suggesting a potential interaction between intratumoral bacteria and the host that may be associated with synaptic activity. We then conducted metabolomic assays on the same set of glioma tissues and integrated the results with the microbiome data to identify microbe-associated metabolites. Remarkably, the PCA plots demonstrated a clear separation of the metabolome between the G and NAT groups, both in ES+ (electrospray ionization positive mode in mass spectrometry) and ES− (electrospray ionization negative mode in mass spectrometry) (see Fig. S6a and b in the supplemental material). Furthermore, the OPLS-DA score scatterplots exhibited even clearer separation between the G and NAT groups, further confirming the distinct metabolic profiles of glioma tissues compared to adjacent normal tissue controls, in both ES+ and ES− . To identify differential metabolites, variable importance in projection (VIP) scores for the first principal component of the OPLS-DA model were obtained, and metabolites with VIP > 1 and P < 0.05 were defined as differential metabolites. We identified 79 differential metabolites in ES+ and 44 differential metabolites in ES− (see Table S3 in the supplemental material), which were visualized with a hierarchical clustering heatmap (see Fig. S6c and d).
Subsequently, all the differential metabolites underwent regulatory pathway analysis to identify the metabolic pathways most highly correlated with them. Our analysis revealed six significantly altered metabolic pathways in the G group: aminoacyl-tRNA biosynthesis; arginine and proline metabolism; nitrogen metabolism; taurine and hypotaurine metabolism; alanine, aspartate, and glutamate metabolism; and pyrimidine metabolism . To further unravel the metabolites associated with the intratumoral microbiota in glioma, we performed a Spearman correlation analysis between all the differential metabolites and alpha diversity. Remarkably, we identified 16 metabolites that exhibited correlations with alpha diversity . Subsequently, we explored the associations between these 16 differential metabolites and the six differential bacteria. Our results showed some interesting correlations: (R)-N-methylsalsolinol displayed positive correlations with Longibaculum, Limosilactobacillus, and Intestinimonas, while N-acetylglutamic acid, N-acetyl-l-aspartic acid, N-acetylaspartylglutamic acid, and d-alanine exhibited negative correlations with Arthrobacter . It is worth noting that (R)-N-methylsalsolinol, a dopaminergic neurotoxin, has been found to increase in the cerebrospinal fluid of patients with Parkinson's disease . N-acetylaspartylglutamic acid (NAAG) and N-acetyl-l-aspartic acid (NAA), a neurotransmitter and its precursor, respectively, have been reported to inhibit the differentiation of glioma stem cells . Additionally, d-alanine, a peptidoglycan constituent of bacterial cell walls, has been implicated as a biomarker and potential treatment for schizophrenia . As demonstrated in the previous sections, we identified significant correlations between multiple genes, metabolites, and the microbiota within glioma. In this section, we further explored the complex interactions among these three factors.
Network analysis revealed meaningful, interconnected correlation networks between the intratumoral microbiota, tissue metabolites, and host genes . To assess whether genes mediate microbial effects on tumor metabolism, we performed a mediation analysis. Remarkably, we found that 5-hydroxytryptamine receptor 1D (HTR1D) and signal transducer and activator of transcription 4 (STAT4) are associated with a majority of the characteristic bacteria and differential metabolites . However, the mediation effects of the 10 pathways mediated by HTR1D and STAT4 were not statistically significant (P mediation > 0.05) (see Fig. S7a and b in the supplemental material). Further investigation focused on evaluating the function of metabolites in mediating the impact of microbiota on host gene expression. Our results indicated that N-acetylglutamic acid, PC(22:5(7Z,10Z,13Z,16Z,19Z)/22:4(7Z,10Z,13Z,16Z)), and (R)-N-methylsalsolinol displayed correlations with some characteristic bacteria and some differentially expressed genes . Notably, mediation analysis showed that Arthrobacter causally contributed to riboflavin kinase (RFK) through N-acetylglutamic acid (P mediation = 0.02) . Additionally, Longibaculum causally contributed to glutamate ionotropic receptor NMDA type subunit 2B (GRIN2B) through PC(22:5(7Z,10Z,13Z,16Z,19Z)/22:4(7Z,10Z,13Z,16Z)) (P mediation = 0.016) . Moreover, Limosilactobacillus causally contributed to ribosomal modification protein rimK-like family member A (RIMKLA) through (R)-N-methylsalsolinol (P mediation = 0.042) . N-acetylglutamic acid (NAG) has been found to be involved in the regulation of NAAG degradation . RFK, also known as riboflavin kinase, is the enzyme responsible for synthesizing flavin mononucleotide (FMN) . Evidence suggests that FMN can improve the degeneration of dopaminergic neurons .
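The microbe -> metabolite -> gene mediation logic described above can be sketched with a simple product-of-coefficients test. This is a minimal illustration on synthetic data with a bootstrap confidence interval for the indirect effect; the authors' actual mediation procedure may differ.

```python
# Illustrative sketch (synthetic data): product-of-coefficients mediation
# (exposure = microbe abundance, mediator = metabolite, outcome = gene
# expression) with a percentile bootstrap CI for the indirect effect a*b.
import numpy as np

rng = np.random.default_rng(4)
n = 200
microbe = rng.normal(size=n)                      # exposure
metabolite = 0.6 * microbe + rng.normal(size=n)   # mediator
gene = 0.5 * metabolite + rng.normal(size=n)      # outcome

def indirect_effect(x, m, y):
    a = np.polyfit(x, m, 1)[0]                    # mediator ~ exposure slope
    X = np.column_stack([np.ones_like(x), x, m])  # outcome ~ exposure + mediator
    b = np.linalg.lstsq(X, y, rcond=None)[0][2]   # mediator coefficient
    return a * b

obs = indirect_effect(microbe, metabolite, gene)
boot = np.array([
    indirect_effect(*(arr[idx] for arr in (microbe, metabolite, gene)))
    for idx in (rng.integers(0, n, n) for _ in range(500))
])
ci = np.percentile(boot, [2.5, 97.5])             # 95% bootstrap CI
print(obs, ci)
```

A CI for a*b that excludes zero is the usual evidence that the metabolite mediates the microbe's association with the gene.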
PC(22:5(7Z,10Z,13Z,16Z,19Z)/22:4(7Z,10Z,13Z,16Z)) represents a phosphatidylcholine containing docosapentaenoic acid (DPA), which has been detected in metabolomic studies of a variety of neurologic diseases . DPA is an omega-3 polyunsaturated fatty acid with protective effects on neurons . GRIN2B is a subunit of the NMDA receptor, which plays a crucial role in neural development , and is implicated in the communication between gut microbiota and the brain . RIMKLA, also known as ribosomal modification protein rimK-like family member A, has been identified as the synthetase of NAAG . In conclusion, our integrative analysis indicated that the intratumoral microbiota of glioma may affect the expression of neuron-related genes through metabolites related to neuronal function. Integration of the transcriptome and metabolome analysis reveals the potential role of Fusobacterium nucleatum in a glioma mouse model Remarkably, we detected an abundance of Fusobacterium in glioma tumor tissue . To further confirm the presence of Fusobacterium in glioma, we stained human glioma tissue and matched adjacent normal tissue samples using a Cy3-labeled Fusobacterium probe. The results showed that Fusobacterium levels in the tumor samples were higher than those in the matched normal brain tissue (see Fig. S8a and b in the supplemental material). Given that Fusobacterium nucleatum (Fn) accelerates the development of colorectal and breast cancers, we used a subcutaneous glioma xenograft mouse model to examine whether it also affects tumor progression in glioma, following the scheme in . Briefly, mice were divided into four groups and given intratumoral injections of PBS, Fn, metronidazole (MTZ), or Fn combined with MTZ. shows that the Fn group had a much greater tumor weight than the PBS and Fn + MTZ groups. Consistently, the trend persisted when tumor size was evaluated . These results suggest that Fn accelerates tumor growth, whereas Fn-induced tumor exacerbation can be prevented by metronidazole treatment.
To gain deeper insights into the specific mechanism by which Fn promotes glioma growth, we collected tumor tissues from the four mouse groups and performed a transcriptomic analysis. The differentially expressed genes among these groups are detailed in Table S4 in the supplemental material. A Venn diagram revealed 70 genes shared by the Fn vs PBS and Fn vs Fn + MTZ comparisons that were absent from the Fn + MTZ vs MTZ comparison (see Fig. S9a). Subsequently, we performed KEGG pathway enrichment analysis on these 70 genes, which revealed significant enrichment of the IL-17 and TNF signaling pathways . Since CCL2, CXCL1, and CXCL2 were enriched in both pathways, we performed differential analysis of these three genes. The results revealed that the expression levels of CCL2, CXCL1, and CXCL2 in the Fn group were markedly higher than those in the other three groups . Notably, previous studies have revealed the close association of these genes with Fn in promoting tumor progression . In addition, ELISA validated the differential expression of CCL2, CXCL1, and CXCL2 across the four groups . Furthermore, we conducted metabolomic analyses on the four groups of mouse tumor tissues that received distinct treatments. The differential metabolites among these groups are detailed in Table S5 in the supplemental material. A Venn diagram revealed 11 metabolites that overlapped between the Fn vs PBS and Fn vs Fn + MTZ comparisons but were absent from the Fn + MTZ vs MTZ comparison (see Fig. S9b). Subsequently, we performed a correlation analysis between these 11 differential metabolites and the 70 differentially expressed genes. Notably, N-glycolylneuraminic acid exhibited a strong correlation with numerous differentially expressed genes .
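The Venn-style gene selection used above (altered by Fn vs PBS, rescued in Fn vs Fn + MTZ, but unchanged in Fn + MTZ vs MTZ) reduces to simple set algebra. The gene IDs in this sketch are made up for illustration.

```python
# Illustrative sketch: intersect/difference of differentially expressed gene
# sets across the three pairwise comparisons. Gene names are hypothetical.
fn_vs_pbs = {"Ccl2", "Cxcl1", "Cxcl2", "Il6", "Tnf", "Actb"}
fn_vs_fnmtz = {"Ccl2", "Cxcl1", "Cxcl2", "Il6", "Gapdh"}
fnmtz_vs_mtz = {"Il6", "Gapdh"}

# Shared by the first two comparisons, absent from the MTZ-control comparison
candidates = (fn_vs_pbs & fn_vs_fnmtz) - fnmtz_vs_mtz
print(sorted(candidates))  # ['Ccl2', 'Cxcl1', 'Cxcl2']
```

The same operation applies unchanged to the metabolite sets used to isolate the 11 overlapping metabolites.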
Furthermore, the expression of N-glycolylneuraminic acid in the Fn group was remarkably higher than that in the other three groups . Remarkably, the sialic acid N-glycolylneuraminic acid (Neu5Gc) is regarded as a potential human cancer biomarker . Previous studies have reported the abundance of Neu5Gc in mouse brain tumor tissues . Together, these data suggested that Fn promoted glioma growth by increasing the levels of N-glycolylneuraminic acid and the expression of CCL2, CXCL1, and CXCL2. Fusobacterium nucleatum promotes glioma proliferation and upregulates CXCL2 levels in an in vitro model To better simulate clinical conditions, we established patient-derived glioma organoid models and co-cultured them with Fn to further validate the effect of Fn on promoting glioma proliferation . We then measured glioma organoid viability using the ATP assay, following the methods outlined in previous research . The results demonstrated that organoids co-cultured with Fn had higher viability than the control group . After 2 days of co-culture, we observed that the organoids co-cultured with Fn exhibited larger diameters than those of the control group . After 5 days of co-culture, we collected the organoids, prepared paraffin-embedded sections, and performed Ki67 staining. The results showed higher Ki67 expression in the Fn co-cultured organoids compared to the control group . Similarly, we tested the effect of Fn on glioma cell proliferation in different glioma cell lines. Consistently, glioma cells co-cultured with Fn exhibited higher proliferation rates compared to the control group (see Fig. S10a and b in the supplemental material). These results collectively indicate that Fn treatment promotes glioma proliferation. Since the first identification of intratumoral bacteria in solid tumors, the tumor microbiome has been a focus of cancer research.
Currently, 16S rRNA gene sequencing, immunohistochemistry staining, and multi-omics techniques that combine genomics, transcriptomics, proteomics, and metabolomics are powerful analytical methods for characterizing the tumor microbiome . In this study, we identified bacteria enriched in glioma tissues by 16S rRNA gene sequencing, verified the presence of bacterial LPS and RNA by morphological experiments, and used multi-omics techniques to explore the interaction between intratumoral microbes and the tumor microenvironment, although further experiments are needed to validate these findings. In addition, we also attempted to address the issue of contamination filtering in the tumor microbiome. We mainly followed the contamination filtering method described by Nejman et al. and Fu et al. , which involved collecting negative control samples during sample collection and processing and performing contamination correction in the analysis. With the growing attention to the gut-brain axis , the gut microbiota has been found to be involved in the development, progression, and treatment of glioma by metabolically modulating the epigenetic and immune microenvironments . Recently, emerging evidence has also pointed to a potential function of the intratumoral microbiome in tumor behavior and treatment responses . Elucidating the molecular features of distinct tumor subtypes and predicting clinical prognosis from a microbiome perspective are of great importance for cancer therapy. By analyzing 16S rRNA gene sequencing data, we identified six differential bacteria in glioma tissues compared to matched adjacent normal brain tissues: Fusobacterium, Longibaculum, Intestinimonas, Pasteurella, Limosilactobacillus, and Arthrobacter . It is well known that tumor hypoxia, a centerpiece of disease progression mechanisms, is also a common pathological feature of glioma.
We found that most of the genera enriched in glioma tissue were anaerobic, which does not seem to be a coincidence. In fact, the high degree of hypoxia in the tumor, the immunosuppressive microenvironment, and the disturbed vascular system are favorable conditions for rapid bacterial colonization, growth, and replication within the tumor . Furthermore, we observed a marked discrepancy between the microbial composition of fecal samples and tumor samples from the same cohort of glioma patients. This implies that the gut microbiota is not the only possible source of intratumoral bacteria in glioma; these bacteria may also originate from the oral cavity or adjacent brain tissue. We make the following speculations: (i) glioma might change the local microenvironment, for example through blood–brain barrier disruption and immunosuppression, enabling bacteria to infiltrate the tumor through hematogenous or neuronal retrograde pathways; (ii) these bacteria could have been present in the brain tissue prior to tumorigenesis, and those that adapted to the tumor microenvironment survived and grew during tumor development. Of course, these speculations should be investigated in future studies. Fusobacterium is a genus of obligate anaerobic rods, and Fusobacterium nucleatum has been reported as a bacterium closely related to tumorigenesis . Fusobacterium was previously found to be enriched in stool samples from glioma patients ; here, it was found to be enriched in the tumor tissues of glioma patients. Longibaculum, a bacterial genus within the family Erysipelotrichaceae, has not been extensively investigated in the context of glioma; however, studies have indicated its involvement in the weight-independent improvement of blood glucose after gastric bypass surgery . Intestinimonas was reported to exhibit a continuous increase in abundance during tumor growth in a mouse model of glioma .
Remarkably, Intestinimonas possesses distinctive metabolic capabilities that allow it to produce butyrate from both carbohydrates and amino acids . Pasteurella is a genus of opportunistic pathogens, and one of its species, Pasteurella multocida, has been reported to cause bacterial meningitis . Although investigations thus far have not illuminated any direct link between Pasteurella and glioma, the existence of Pasteurella multocida toxin (PMT), characterized by profound mitogenic activity and carcinogenic potential , presents an intriguing possibility: PMT may serve as a pivotal mediator of an association between Pasteurella and glioma pathogenesis. Limosilactobacillus reuteri, extensively studied for its protective properties , has recently been implicated in the induction of multiple sclerosis . Arthrobacter strain NOR5 has exhibited remarkable proficiency in facilitating the complete degradation of nornicotine , a direct precursor of tobacco-specific nitrosamines (TSNAs) known for their potent carcinogenic properties . Furthermore, Arthrobacter citreus strains can metabolize caprolactam, thereby generating glutamate, an important excitatory neurotransmitter in the central nervous system. Based on these findings, it is conceivable that Arthrobacter holds probiotic potential. Notably, our investigation also revealed an association between Arthrobacter and NAA as well as NAAG. Undeniably, culturing bacteria from fresh glioma tissues would provide the critical evidence for their presence. Regrettably, our endeavors in bacterial culture encountered setbacks: our efforts resulted solely in the cultivation of Pseudomonas stutzeri, a bacterium that has exhibited resistance against multiple antibiotics (data not shown). We speculate that this outcome may be attributed to the constraints posed by current medical practices, particularly the widespread use of prophylactic antibiotics.
Furthermore, a noteworthy observation in our study was the marked elevation of LPS signals in glioma tissue in contrast to adjacent normal brain tissue. LPS, a prevalent endotoxin, can interact with Toll-like receptor 4 (TLR4) in vivo , provoking the activation of monocytes and macrophages and thereby instigating the synthesis and subsequent release of various cytokines and inflammatory mediators . Remarkably, investigations have illustrated that glioblastoma clinical samples exhibit heightened expression levels of TLR4 , corroborating the findings of our study. Moreover, we also found that LPS signals were more prevalent in tumor cell-enriched regions, while macrophage-enriched regions had fewer signals. We propose that this may result from LPS-induced macrophage activation and subsequent phagocytic clearance by macrophages. Notably, alongside the bacterial LPS, RNA, and DNA found in human gliomas, recent studies have unraveled the presence of bacteria-derived peptides on human leukocyte antigen molecules in glioblastoma . These peptides elicit strong responses from tumor-infiltrating lymphocytes and peripheral blood memory cells, suggesting that microbial peptides activate tumor-infiltrating lymphocytes in glioblastoma. In addition, we simultaneously performed transcriptomic sequencing on patient-derived glioma tissues. We found that 28 differentially expressed genes associated with six differential bacterial genera were enriched in pathways such as serotonergic synapse, cholinergic synapse, glutamatergic synapse, and dopaminergic synapse. The integrated omics analysis revealed that the intratumoral microbiome may affect the expression of neuron-related genes through bacteria-associated metabolites. Previous review articles have highlighted the ability of gliomas to exploit normal mechanisms of neuronal development and plasticity, leading to the formation of neuron-glia synapses with subsequent enhancement of glioma proliferation .
Building upon these limited but intriguing pieces of evidence, we speculate that the intratumoral bacteria in glioma may be involved in the formation of neuron-glia synapses. Additionally, a noteworthy discovery emerged as we unraveled the association of HTR1D and STAT4 with the characteristic bacteria and the majority of the differential metabolites. HTR1D, a type of 5-hydroxytryptamine (5-HT) receptor, activates intracellular signaling pathways through G proteins upon binding to serotonin . Such signal transduction represents a fundamental process mediated by the monoamine neurotransmitter 5-HT. Consequently, HTR1D may be an important molecule in the neuron-glia synapses involving the intratumoral bacteria of glioma. Concurrently, STAT4 operates as a transcription factor. Prior studies employing the Oncomine database scrutinized the mRNA expression levels of STAT gene family members in glioma, revealing diminished STAT4 mRNA expression in comparison to that observed in normal controls , an observation corroborated by our own inquiry. Of equal import, several studies have unveiled STAT4's pivotal role in regulating neutrophil function, as demonstrated by the impact of STAT4 deficiency on neutrophil extracellular trap formation and antibacterial immunity . Therefore, within the context of intratumoral bacteria-associated glioma, STAT4 emerges as an indispensable molecule in tumor-associated immune regulation. We found that Fn promotes glioma proliferation and upregulates CCL2, CXCL1, and CXCL2 levels in in vivo and in vitro models of glioma. A recent study found that Fn induces the secretion of pro-inflammatory cytokines, including CCL2 and CXCL1, through the bacterial surface adhesin Fap2 , DNA starvation/stationary phase protection proteins (Dps) , or CXCL2-mediated crosstalk between tumor cells and macrophages . In addition, intratumoral F. nucleatum promotes pancreatic cancer progression through autocrine and paracrine mechanisms of the CXCL1-CXCR2 axis .
Additionally, in the glioma microenvironment, upregulation of CCL2 promotes the infiltration of tumor-associated macrophages, which, in turn, promote the proliferation and survival of glioblastoma cells by transferring LDHA-containing extracellular vesicles . Elevated levels of CXCL1 or CXCL2 promote myeloid cell migration while disrupting the accumulation of CD8 T cells at the tumor site, leading to accelerated glioblastoma progression. These findings point to potential mechanisms, and we will further explore how Fn upregulates cytokines such as CCL2, CXCL1, and CXCL2 to promote glioma development in future studies. The relationship between intratumoral bacteria and host cells in the tumor microenvironment is a key area of interest. In this study, we conducted a preliminary investigation into the distribution of bacterial LPS signals in tumor cells and macrophages, but LPS and CD68 are not specific markers for bacteria and macrophages, respectively. Flow cytometry could provide more accurate quantitative information on bacteria within macrophages and tumor cells. Proteomics and single-cell RNA sequencing are necessary to further understand the interactions between microbes and the host. In addition, for a more comprehensive understanding, new techniques like spatial metatranscriptomics and InvadeSeq (invasion-adhesion-directed expression sequencing) can be considered. These methods can simultaneously capture both bacterial and host RNA information from tissue sections, providing in situ insights into functional interactions between microbes and host cells in the tumor microenvironment. Furthermore, given that microbes often interact with the host through metabolites, we could also consider using the metaFISH method . This technique combines the high resolution of fluorescence microscopy, the specificity of FISH probes, and high-resolution MALDI-MSI to map the spatial distribution of metabolites at the cellular level.
This could help us better understand the communication, defense, and nutrient exchange between host and microbes. These advanced technologies offer new perspectives for investigating microbial-host interactions in future studies. Previous studies have demonstrated the ability of Fusobacterium nucleatum to confer chemotherapy resistance and to enhance the efficacy of PD-L1 blockade in colorectal cancer. Given the enrichment of Fn observed in glioma and its capability to induce the production of chemokines, it becomes crucial to explore the potential associations between Fn and temozolomide, a first-line chemotherapy drug for glioma, as well as the interplay between Fn and PD-L1 in the glioma context. These avenues of investigation hold considerable promise for future research. The advancement of bacterial engineering technology offers an exciting opportunity to develop specifically modified bacteria that can serve as anti-tumor carriers. We previously explored the potential mechanism of the genetically engineered bacterium Salmonella YB1 in the treatment of glioma and found that Salmonella YB1 inhibits the expression of glutathione peroxidase-4 and induces ferroptosis to suppress glioma growth . The key bacteria identified in this study may be considered potential engineered bacteria for glioma treatment. Despite the significance of our findings, our study has certain limitations. First, the postulated interactions between the intratumoral bacteria and glioma necessitate validation through targeted experimental investigations. Second, our study sample size was relatively small, highlighting the need for multicenter, large-sample studies to elucidate the relationship between key bacteria, their related metabolites, and glioma prognosis.
Third, even though we referred to the existing and well-recognized methods for identifying intratumoral microbiota, we still could not completely eliminate the internal contamination from samples and the external contamination from the environment. Therefore, the analytical methods for contamination filtering in the tumor microbiome should be improved and standardized in the future. Nevertheless, our study provides insights into the intricate interplay between intratumoral bacteria and glioma, potentially inspiring new avenues of exploration in glioma biology. Looking ahead, an in-depth study of the intratumoral microbiota holds immense promise for advancing anti-cancer treatment. Conclusion Overall, a multi-omics analysis of human glioma tissue showed that the intratumoral microbiome may affect the expression of neuron-related genes through bacteria-associated metabolites. Both in vivo and in vitro experiments demonstrated that Fn, as a key bacterium enriched in glioma tissue, promotes glioma proliferation and increases the expression of CCL2, CXCL1, and CXCL2. Our work reveals the oncogenic roles of Fn and suggests that Fn could be a potential diagnostic and therapeutic target for glioma patients. |
Patient‐Reported Outcomes Following Systemic Antibiotic Adjunct to Nonsurgical Treatment of Periodontitis: A Randomized Controlled Clinical Trial | f6070d0c-0d6f-4515-a797-35e5e2a369aa | 11726368 | Dentistry[mh] | Background Periodontal diseases, which include gingivitis and periodontitis, are highly prevalent in adults and are mainly caused by intraoral biofilms containing periodontal pathogenic bacteria such as Porphyromonas gingivalis and Prevotella intermedia (Trindade et al. ). Although periodontitis is considered a “silent disease,” it causes swelling, bleeding, pain, and tooth loss (Buset et al. ). Since periodontitis is often infectious and dependent on pathogenic microorganisms and most patients cannot mechanically remove microbial plaque, clinical interventions are recommended for more extended periodontal survival (De la Rosa et al. ; MacGregor, Rugg‐Gunnand, and Gordon ; Kocher, Tersic‐Orth, and Plagmann ). Supra‐ and subgingival instrumentation, various types of surgical interventions, patient motivation, and re‐instruction during the maintenance phase of treatment can all be part of periodontitis treatment (Herrera et al. ). Based on the microbial etiology of periodontitis, systemic administration of antibiotics is considered an adjunct to controlling bacterial infections (Barça, Çifçibaşı, and Çintan ). According to reports, using a combination of Metronidazole (250 mg) and Amoxicillin (500 mg) three times a day for 7 days has been considered a sensible choice. However, it is deemed ineffective without mechanical treatments (Heitz‐Mayfield ). In recent decades, healthcare systems have increasingly acknowledged the critical role of patients' perspectives in ensuring the delivery of high‐quality, equitable, and safe services. Central to this paradigm shift has been the growing integration of patient‐reported outcomes (PROs), which provide direct insights into patients' perceptions and are now recognized as essential metrics for improving care. 
PROs have gained significant attention in clinical dentistry, reflecting a shift towards patient‐centered care. This trend emphasizes the importance of understanding how oral health impacts overall well‐being, beyond just clinical measures like DMFT or periodontal pocket depth. Researchers are increasingly focusing on the psychosocial effects of oral conditions, such as pain, appearance, and social interactions. As a result, PROs have become key metrics for evaluating treatment outcomes and improving patient care in both clinical and public health settings (Williams et al. ; Sischo and Broder ; Yu et al. ). A key area where patient‐reported outcomes have become particularly valuable is in assessing Oral Health–Related Quality of Life (OHRQoL). OHRQoL is a multidimensional construct that reflects the impact of oral health on an individual's overall well‐being, including physical, emotional, and social dimensions. It goes beyond clinical indicators to capture the subjective experience of patients, such as pain, discomfort, and functional limitations related to oral health (Rothman et al. ). Assessing the OHRQoL can help develop policies and interventions to improve patients' health (Nagarajan and Chandra ). Accordingly, previous studies have demonstrated that periodontal treatments can significantly enhance the quality of life in patients suffering from periodontitis (Needleman et al. ; John ; Cunha‐Cruz, Hujoel, and Kressin ; Ng and Leung ). Besides, it is confirmed that subgingival scaling and root planning significantly impact OHRQoL more than supragingival scaling (Goel and Baral ). Various instruments have been validated to evaluate the OHRQoL, including the short and long versions of Oral Health Impact Profile‐14 and ‐49 (OHIP‐14 and OHIP‐49), Oral Impact on Daily Performance (OIDP), OHQoL‐UK(W), a conceptual instrument for pregnant women, etc. (McGrath and Bedi ; Slade , ; Åstrøm and Okullo ; Fakheran et al. ). 
OHIP‐14 has been validated and is reliable for the Iranian population. It is a multidimensional measurement tool that examines the cultural, social, and functional aspects of quality of life (Navabi, Nakhaee, and Mirzadeh ). Although fundamental concerns about the excessive use of antibiotics leading to the emergence of antibiotic resistance should be considered, we should not deprive patients of logical and tangible outcomes of antibiotic therapy (Loos and Needleman ). It should be noted that the absence of evidence does not mean that antibiotics cannot improve the patient's quality of life. Several international health policy and regulatory organizations have acknowledged the importance of PROs. If collected in a manner that adheres to scientific rigor, the results of studies focusing on patient‐centered outcomes have the potential to impact healthcare policy, pharmaceutical labeling claims, and clinical practice guidelines. Based on the European Federation of Periodontology (EFP) S3‐level clinical practice guideline, the adjunctive use of specific systemic antibiotics can be considered in the treatment of generalized Stage III periodontitis in young adults. Accordingly, in this randomized controlled clinical trial, we aimed to evaluate the impact of systemic adjunctive antibiotic administration following subgingival instrumentation on the oral health‐related quality of life (OHRQoL) of young patients (≤ 40 years old) diagnosed with generalized Stage III periodontitis. To our knowledge, this is the first controlled clinical trial addressing this specific topic. Materials and Methods 2.1 Clinical Trial Design This prospective, randomized, controlled trial involved two parallel groups and included 70 patients undergoing nonsurgical periodontal treatment. This study received approval from the Ethics Committee of Isfahan University of Medical Sciences and was registered with the IRCT20201221049786N1 Registry of Clinical Trials on 13/02/2021. 
It adhered to the principles of the Declaration of Helsinki and followed CONSORT guidelines. Additionally, the Sex and Gender Equity in Research (SAGER) guidelines were adhered to in this investigation, and all participants provided written informed consent before their involvement. Based on similar previous studies, to achieve 80% test power, 25 patients were required in each group to identify significant differences in median values at a 5% level ( d = 0.7) (Navabi, Nakhaee, and Mirzadeh ; Pakpour et al. ; Hajian‐Tilaki et al. ). Since this study was organized during the COVID‐19 pandemic and there was a high possibility of patients not following the treatment protocol, we decided to include a larger number of participants than the calculation required. The ethics committee also approved this consideration. For this reason, we finally had 35 patients in each study group. The inclusion criteria were adult (≥ 18 and ≤ 40 years old) patients with generalized stage III periodontitis where clinical attachment loss was more than 3 mm in more than 30% of the remaining teeth and having at least 16 natural teeth (excluding third molars). Participants had no other oral diseases, except for periodontitis, such as decayed teeth, pericoronitis, soft tissue lesions, or malocclusion, that required treatment within the next 3 months. The exclusion criteria encompassed individuals with a prior allergic reaction to Penicillin or Metronidazole, those currently taking antibiotics, and individuals with systemic conditions like diabetes, HIV/AIDS, liver disorders, chronic renal failure, and autoimmune diseases. Also excluded were patients undergoing periodontal treatment in the preceding 6 weeks and individuals who were concurrently pregnant or breastfeeding. The reasons for excluding patients after inclusion in the study were the use of extra antibiotics, withdrawal from the study, and receiving any other dental treatment during the follow‐up period.
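As an aside, the per-group figure quoted above can be reproduced approximately with the standard two-sample normal-approximation formula. This is an illustrative sketch only, not the authors' actual calculation; in particular, a one-sided 5% level is an assumption made here so that the formula lands near the cited 25 patients per group.

```python
# Sketch only: approximate per-group sample size for a two-sample comparison
# with standardized effect size d, via n = 2 * ((z_alpha + z_beta) / d)^2.
# one-sided alpha = 0.05 is an assumption, not stated in the cited studies.
from math import ceil
from statistics import NormalDist

def n_per_group(d: float, power: float = 0.80, alpha: float = 0.05) -> int:
    z_alpha = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value
    z_beta = NormalDist().inv_cdf(power)       # power term
    return ceil(2 * ((z_alpha + z_beta) / d) ** 2)

print(n_per_group(0.7))  # 26, close to the 25 per group cited above
```

With a two-sided 5% level the same formula gives roughly 33 per group, so the exact test assumed by the cited power calculation matters.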
These patients were selected by a periodontist and treated at the Periodontal Department of Isfahan University of Medical Sciences between February 2021 and April 2022. After providing written informed consent, patients were randomly assigned to either the control (Placebo) or the test groups (Amoxicillin 500 mg and Metronidazole 250 mg, three times a day) for 7 days using computer‐generated randomization codes concealed in opaque, sealed envelopes. This study's randomization and subsequent allocation process involved a dental intern who prepared 70 medication packets. Each packet was assigned a unique numeric code ranging from 01 to 70, which was then printed on the labels of the respective packages. To ensure blinding, the medication packets were concealed with their numeric codes. When it came time to distribute the medication packets to eligible patients, they were given out in a numeric order. This means the patients received the medication based on the assigned numeric codes without knowing the contents. This process ensured that the distribution of the packets was done randomly and in a manner that maintained the blinding of both the dental intern and the patients. Hence, the investigator was blind to the treatment assigned to patients. The placebo was an inactive substance made by the Pharmacy School of Isfahan University of Medical Sciences and bore a resemblance to the primary drug. Throughout the course of the medication period, an assistant called the patients three times a week to check on compliance with the consumption of antibiotics and placebos. After the medication week, the individuals were requested to return the bottles so that they could be examined for any leftover antibiotic or placebo tablets. 2.2 Evaluation of Efficacy The clinical examination of each patient included Pocket Probing Depth (PPD), Bleeding On Probing (BOP), Clinical Attachment Loss (CAL), and Gingival Index (GI). 
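Returning to the allocation procedure above, a minimal sketch of the concealed scheme is given below. The exact generator, seed, and labelling workflow used in the trial are not reported; the arm names and the fixed seed here are illustrative only.

```python
# Hypothetical illustration: 70 medication packets, 35 per arm, shuffled
# once and labelled "01".."70" so that dispensing them in numeric order
# yields a concealed random allocation. Seed fixed only for reproducibility.
import random

def make_allocation(n: int = 70, seed: int = 0) -> dict:
    arms = ["antibiotic"] * (n // 2) + ["placebo"] * (n - n // 2)
    rng = random.Random(seed)
    rng.shuffle(arms)
    # Packet labels are zero-padded codes; the dispenser never sees the arm.
    return {f"{i:02d}": arm for i, arm in enumerate(arms, start=1)}

allocation = make_allocation()
print(len(allocation))                   # 70
print(sorted(set(allocation.values())))  # ['antibiotic', 'placebo']
```

The point of the design is that the code-to-arm table stays with the pharmacy, so handing out packets in label order reveals nothing to the intern or the patient.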
For examiner calibration, a Kappa coefficient of 0.85 or higher was used. Ten patients, each having at least five teeth with PPD and CAL of 5 mm or more at proximal sites, were selected. Each patient underwent two examinations, with a 48‐h interval between the first and second assessments (Kappa = 0.89). PPD was measured using a Hu‐Friedy PCP UNC 15 probe at four points around each tooth, including distobuccal/distolabial, mid buccal/midlabial, mesiobuccal/mesiolabial, palatal/lingual surfaces. The deepest PPD measurement was recorded for statistical analysis. CAL was calculated by adding the probing depth to the gingival margin at mid buccal/midlabial level, and the greatest CAL was the final record. The probe was gently inserted and moved around the teeth to test BOP. Then, 30 s after probing, the presence of BOP was determined, and the percentage of bleeding sites was scored. Based on the presence or absence of BOP for surfaces of all teeth, the GI was scored on a 0–3 scale, with 0 indicating normal gingiva and 3 showing severe inflammation, redness, edema, spontaneous bleeding, and ulceration. These measurements were repeated at 1‐month and 3‐month recalls. A single dental intern blinded to the intervention recorded all the measures at all time points. The same periodontist performed full‐mouth scaling and root planing using a Cavitron ultrasonic scaler. Then, hand instrumentation was employed using a subgingival curette (Gracey curette SG 11/12, 5/6, or 13/14) to remove the deposits from the root and tooth surface for all patients at baseline. Additionally, all participants were given exact oral hygiene instructions (flossing and brushing in gentle circular strokes), fluoride‐rich toothpaste, dental floss, and medium‐bristle toothbrushes. Patients, operative investigator, and nonoperative investigator were all blinded to the medication received after the treatment procedure.
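The per-tooth recording rules above (the deepest of four PPD sites is kept; CAL is the probing depth plus the gingival-margin level at the mid-buccal/mid-labial site) can be sketched as follows. The field names, and the use of the mid-buccal probing depth in the CAL sum, are my reading of the text rather than the trial's actual data dictionary.

```python
# Illustrative sketch of the per-tooth measurement rules described above.
from dataclasses import dataclass

@dataclass
class ToothRecord:
    # PPD at four sites: distobuccal, mid-buccal, mesiobuccal, palatal/lingual
    ppd_sites_mm: tuple
    # gingival-margin level at the mid-buccal/mid-labial site (mm)
    mid_buccal_margin_mm: int

    @property
    def ppd_mm(self) -> int:
        return max(self.ppd_sites_mm)  # deepest site is recorded

    @property
    def cal_mm(self) -> int:
        # CAL = probing depth + gingival margin at the mid-buccal level
        return self.ppd_sites_mm[1] + self.mid_buccal_margin_mm

tooth = ToothRecord(ppd_sites_mm=(6, 4, 5, 3), mid_buccal_margin_mm=2)
print(tooth.ppd_mm, tooth.cal_mm)  # 6 6
```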
For the assessment of OHRQoL, each participant was asked to fill in the OHIP‐14 questionnaire at baseline and 1 month and 3 months later. The validity and reliability of the Persian version of OHIP‐14 had been previously confirmed among 400 participants (Cronbach's alpha = 0.85) (Navabi, Nakhaee, and Mirzadeh ). The patient's assessment was documented across seven categories: functional restrictions, physical unease, emotional distress, physical impairment, mental impairment, social limitations, and handicap in this survey. The scoring scale for this questionnaire spans from 0 to 56, with higher scores indicating a more unfavorable condition. 2.3 Statistical Analysis Data analysis was performed using IBM SPSS statistical software, version 22.0, developed by IBM Corp. in Armonk, NY, USA. The initial data comparison utilized the Kruskal–Wallis test. Subsequently, the t ‐test was employed to compare paired results across various groups. A significance level of p < 0.05 was deemed statistically meaningful. Results In the test group, three patients did not answer the phone in the follow‐up period, and two patients informed us of bloating after taking the medication. In the control group, two patients did not answer the phone in the follow‐up period, and two forgot to take the medication, so they were excluded. Finally, the control group consisted of 16 (51.6%) men and 15 (48.4%) women with the mean age of 37.03 ± 11.02 years, while the test group included 16 (53.3%) men and 14 (46.7%) women with the mean age of 38.37 ± 10.63 years (Table , Figure ). There was no significant difference between the two groups regarding age, gender, education level, and economic status. There was no significant difference between the two groups regarding the mean CAL, PPD, and GI at the beginning of the study and 1 and 3 months after the intervention ( p value > 0.05). However, CAL and PPD in the test group and GI in both groups decreased significantly within 3 months ( p value < 0.05). Moreover, the occurrence of bleeding on probing in the control group with 41.9% was far more than its occurrence in the test group with 10% 3 months after the intervention ( p value = 0.008). In addition, although the mean of bleeding between the two groups was not significantly different in any of the studied times ( p value > 0.05), a significant decrease in bleeding was observed in both groups 3 months after the intervention ( p value < 0.05) (Table ).
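For reference, the OHIP-14 scoring behind the OHRQoL outcome works as follows: 14 items across the seven listed domains, each rated 0-4, summed to a 0-56 total, with higher scores meaning worse OHRQoL. A hedged sketch is below; the item wording is omitted, and the two-consecutive-items-per-domain pairing is the standard OHIP-14 layout, assumed here rather than stated in the text.

```python
# Sketch of OHIP-14 scoring: 14 items, each 0-4, total range 0-56.
# Domain names follow the text; the item-to-domain pairing is assumed.
DOMAINS = [
    "functional restrictions", "physical unease", "emotional distress",
    "physical impairment", "mental impairment", "social limitations",
    "handicap",
]

def ohip14_total(responses):
    if len(responses) != 14 or not all(0 <= r <= 4 for r in responses):
        raise ValueError("OHIP-14 expects 14 item scores, each 0-4")
    return sum(responses)

def domain_scores(responses):
    # two consecutive items per domain (standard OHIP-14 layout, assumed)
    return {d: responses[2 * i] + responses[2 * i + 1]
            for i, d in enumerate(DOMAINS)}

example = [2, 1, 3, 2, 2, 2, 1, 1, 2, 1, 1, 1, 1, 1]  # hypothetical answers
print(ohip14_total(example))   # 21
print(ohip14_total([4] * 14))  # 56 (worst possible score)
```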
Furthermore, the evaluation of OHRQoL indicated no significant difference in the mean OHRQoL score between the two groups at the beginning of the study ( p value > 0.05). However, the OHRQoL scores of the test group, with the mean scores of 8.7 ± 3.80 and 7.2 ± 3.21 1 month and 3 months after the intervention, respectively, were significantly lower than those of the control group, with the mean scores of 11.5 ± 4.59 and 9.5 ± 3.67, respectively ( p value < 0.05). Although the corresponding decrease in the control group was not statistically significant ( p value = 0.070), the OHRQoL score decreased significantly in the test group ( p value < 0.001). The examination of each OHRQoL dimension revealed that physical pain, physical disability, and overall disability (handicap) 1 and 3 months after the intervention, and psychological discomfort only 3 months after the intervention, were significantly lower in the test group as compared with the control group ( p value < 0.001) (Table ). Finally, the evaluation of the relationship between OHRQoL and changes in CAL, PPD, and BOP indices in each of the studied groups showed that only PPD, with correlation coefficients of 0.396 and 0.407 in the control and test groups, respectively, had a direct and significant relationship with OHRQoL ( p value < 0.05) (Table ). Discussion The present study aimed to evaluate the effect of antibiotics administered systemically as an adjunct to subgingival instrumentation on the OHRQoL of patients with periodontitis. Since there was no evidence in the literature, we conducted this trial on the participants with stage III periodontitis. Based on the evidence for therapeutic management of periodontitis, mechanical debridement should be accompanied by antibiotic therapy to kill the remaining subgingival pathogens after conventional periodontal therapy (Winkelhoff, Rams, and Slots ).
It is difficult to definitively decide about the usefulness of antibiotic therapy except for patients with deep pockets, progressive disease, or specific microbial patterns (Herrera et al. ). In contrast to previous studies, Hayes et al. showed that systemic tetracycline is not more beneficial than mechanical treatment alone (Hayes, Antczak‐Bouckoms, and Burdick ). Moreover, using antibiotics in patients with pockets of less than 4 mm is a highly controversial subject among studies (Elter et al. ). It should be noted that all antibiotics can give rise to gastrointestinal effects (e.g., nausea, vomiting, diarrhea, abdominal pain, loss of appetite, bloating), and Amoxicillin, in particular, can cause secondary overgrowth of Candida species and Clostridium difficile infections (Mohsen, Dickinson, and Somayaji ). Due to the adverse effects and bacterial resistance, it is essential to consider the logical selection of antibiotics to optimize medication efficacy (Keestra et al. ; Kapoor et al. ). The adjunctive use of 250 mg Metronidazole plus 500 mg Amoxicillin (three times a day for 7 days) after nonsurgical periodontal therapy showed statistically significant and clinically relevant benefits for periodontitis patients (McGowan, McGowan, and Ivanovski ; Borges et al. ). The evaluation of treatment outcomes has traditionally been dominated by objective clinical outcome measures that, although necessary, cannot demonstrate the patients' perception and priorities regarding the treatment (Loos and Needleman ). Hence, prominent international health policy, regulatory bodies, and patients acknowledge the significance of patient‐reported outcomes (Patrick et al. ; Doward, Gnanasakthy, and Baker ). During the last decade, researchers have focused on measuring the quality of life besides clinical indices such as PPD, CAL, and BOP due to the importance of this issue.
Although there is no final definition for OHRQoL, it is described as the subjective evaluation of a person's satisfaction with care and self‐concept (Sischo and Broder ). It is improbable for medical technology to capture all the information regarding treatment or the illness; this type of data can be acquired solely from the patient. Consequently, a growing focus is on assessing patient‐reported outcomes (Chin and Lee ). Furthermore, in some diseases, patient‐reported outcome measurements are more valuable and play a key role, especially when the primary objective of the treatment is not solely centered on survival (Singh ; Deshpande et al. ). As a result of greater attention to OHRQoL, various measurement tools have been developed, such as OHIP‐14, a shorter version of the OHIP‐49 questionnaire (Slade ; Organization ). The OIDP scale assesses the effect on individuals' daily lives and is convenient for use in population surveys (Åstrøm and Okullo ; Adulyanon, Vourapukjaru, and Sheiham ). Robinson et al. compared the concurrent validity of OHIP‐14 and OIDP. Based on the results, although completion rates were similar, the OIDP total score was less informative due to the severe skewness of the data (Robinson et al. ). Hence, OHIP‐14 was considered the instrument of choice to assess OHRQoL in this study. Locker et al. reported that a five‐point reduction in the mean OHIP‐14 score represents a significant difference (Locker and Allen ). In the present study, the mean OHIP‐14 decreased from 20.93 ± 8.32 at baseline to 8.70 ± 3.80 after 1 month and 7.27 ± 3.21 after 3 months of follow‐up in the test group, which was statistically significant. In the control group, it decreased from 19.29 ± 8.63 at baseline to 11.52 ± 4.59 and 9.52 ± 3.67 1 month and 3 months after intervention, respectively, which was not statistically significant compared to the test group. Bery et al. investigated the association between oral health status and HQoL.
They reported a mean OHIP‐14 score of 8.6 for subjects over 15 years (mean age of 34.7), which is undeniably lower than the mean score obtained in the present study at baseline. The variation could be attributed to diverse economic and cultural backgrounds (Bery et al. ). The results of this study were in line with other studies in that physical pain and psychological discomfort indicated the highest scores using OHIP‐14 (Meusel et al. ; Mendez et al. ; Sonnenschein et al. ). Wong et al. reported a significant reduction in mean OHIP‐14 score (from 17 at baseline to 14 after 6 months of follow‐up) after nonsurgical periodontal treatment, which is in line with the results of another study that showed nonsurgical periodontal treatment might improve the patients' OHRQoL (Åslund et al. ; Wong et al. ). Following the most recent clinical practice guideline issued by the EFP, they do not recommend the routine use of antibiotics administered systemically alongside subgingival instrumentation for patients with periodontitis (Sanz et al. ). The basis for this recommendation stems from a high‐quality systematic review published in 2020 (Teughels et al. ). In this review, the authors analyzed 34 articles (of which 28 were clinical trials) to assess the clinical effectiveness of systemic antimicrobial agents used adjunctively in periodontitis patients. Interestingly, none of the studies included in this review addressed aspects like patients' OHRQoL or their satisfaction with the treatment process and outcomes (Teughels et al. ). Consequently, due to the lack of such data, the EFP couldn't incorporate patient‐reported results into their clinical practice recommendations for antibiotic use in managing periodontitis Stages I–III (Sanz et al. ). Furthermore, another consensus report by the authors revealed a similar gap, indicating no available data on how the systemic administration of antibiotics during nonsurgical periodontal therapy impacts OHRQoL (Pretzl et al. ). 
To the best of our knowledge, the impact of adjunctive antibiotics on OHRQoL after nonsurgical periodontal therapy in conjunction with clinical outcomes has yet to be studied, and this study was the first randomized controlled trial to investigate this issue based on the EFP S3‐level clinical practice guideline. Various studies have focused on factors that affect the OHRQoL of patients with periodontitis, including untreated dental decay, destroyed root surfaces, malocclusion, social and emotional states, etc. (Sischo and Broder ; Broder, Wilson‐Genderson, and Sischo ; Andersson et al. ). Additionally, Peikert et al. suggested that adjunctive use of antibiotics could positively affect OHRQoL. Nevertheless, this study was not controlled, and given the limitations, the results should be interpreted with caution (Peikert et al. ). Besides, Harks et al. evaluated the effect of antibiotics on OHRQoL as a secondary outcome, but there was no correlation between the studied clinical parameters and OHRQoL (Harks et al. ). In line with the present results, other studies have shown a direct association between PPD and OHRQoL, i.e., patients with a lower PPD underwent significant improvement in OHRQoL (Needleman et al. ; Brauchle, Noack, and Reich ). Theodoridis et al. evaluated the impact of surgical and nonsurgical periodontal treatments on OHRQoL in conjunction with clinical parameters in Greek adults. Although there was a significant improvement in clinical parameters and OHRQoL after nonsurgical periodontal therapy, surgical treatments did not improve OHRQoL. Moreover, in contrast to the results of the present study, no correlation was found between OHRQoL and clinical parameters. This study was uncontrolled, and the different methodologies adopted can explain this contrast (Theodoridis et al. ). In addition, some other studies have confirmed that the correlation between clinical parameters and OHRQoL is questionable (Peikert et al. ).
Differences in follow‐up intervals, age groups, and indices offer a plausible explanation for these results (Andersson et al. ; Gil‐Montoya et al. ; Saito et al. ). The assessment of the antibiotics' effect on clinical indicators revealed that, in the test group, the average BOP decreased significantly after 3 months compared to the control group, aligning with results from prior studies (Keestra et al. ; Haffajee, Socransky, and Gunsolley ). In line with previous research, the test group's average CAL and PPD scores also showed significant improvement compared to the control group; however, our study had a smaller participant pool, which may explain outcomes that contrast with some prior reports (Keestra et al. ; Harks et al. ; Haffajee, Socransky, and Gunsolley ). In conclusion, while this study highlights the potential benefits of systemic administration of amoxicillin 500 mg and metronidazole 250 mg, three times a day, as adjunctive therapy to periodontal mechanical treatment, the findings must be interpreted within the limitations of the study design. The primary limitation of this study is the relatively short follow‐up period, necessitated by the unplanned suspension of activities during the COVID‐19 pandemic. A longer‐term follow‐up of these patients would be crucial to assess whether the additional benefits observed with antibiotic therapy, when used as an adjunct to subgingival instrumentation, are maintained over time. Extending the duration of the study would provide a more comprehensive understanding of the sustainability of these therapeutic effects and could offer valuable insights into the long‐term efficacy of this combined treatment approach. Furthermore, the OHIP‐14 questionnaire, while widely used to assess oral health‐related quality of life, has inherent limitations (Campos et al. ).
Being a self‐reported measure, it is subject to individual biases, including patients' subjective interpretation of their symptoms and personal expectations of treatment outcomes. These biases can influence the accuracy and reliability of the data, especially in studies evaluating clinical interventions. Additionally, the OHIP‐14 may not fully capture the nuances of specific clinical conditions or the long‐term effects of therapeutic interventions, which could lead to an underestimation or overestimation of treatment impacts. Therefore, future studies should take these limitations into account by incorporating more objective clinical outcomes alongside patient‐reported measures. Researchers and clinicians should be cautious when interpreting self‐reported outcomes and consider supplementing the OHIP‐14 with additional tools or extended follow‐up periods to achieve a more comprehensive understanding of the long‐term effects of treatment. Parastoo Parhizkar: conceptualization, methodology, formal analysis, investigation, software, writing–original draft. Jaber Yaghini: conceptualization, methodology, formal analysis, investigation, writing–review and editing, supervision. Omid Fakheran: conceptualization, formal analysis, investigation, writing–review and editing, supervision. All authors read and approved the final version of the article. This study received approval from the Ethics Committee of Isfahan University of Medical Sciences (IR.MUIRESEARCH.REC.1399.572). Written informed consent was obtained from all participants. The authors declare no conflicts of interest.
Re-evaluating the Morse Fall Scale in obstetrics and gynecology wards and determining optimal cut-off scores for enhanced risk assessment: A retrospective survey | 455d61f8-702e-403b-8d15-3a547072dac2 | 11376562 | Gynaecology[mh] | Falls pose a pervasive and significant threat to patient safety, particularly among inpatients. Statistics suggest that more than one million patients experience falls in hospitals annually . The prevalence and impact of falls in hospital settings are widely acknowledged due to the potential for prolonged hospital stays, physical injuries, and, in severe cases, fatalities . Globally, falls have emerged as a representative benchmark for assessing nursing quality . Addressing and preventing accidental falls in patients consistently remains a critical aspect of safety and quality management. A pivotal strategy in fall prevention involves the proactive identification of patients at risk of falling. Employing appropriate fall assessment tools is crucial in aiding nurses to promptly and accurately gauge the level of fall risk, facilitating the targeted implementation of preventive measures. Currently, there exist several fall risk assessment tools designed to identify the risk of falls in patients. In numerous research studies involving diverse populations, validation has been established for tools such as the St Thomas’s Risk Assessment Tool in Falling Elderly Inpatients (STRATIFY), Morse Fall Scale (MFS), and Hendrich Fall Risk Model (HFRM) . Despite these tools demonstrating moderate-to-good predictive validity and reliability across various studies, divergent views have been presented. A previous systematic review on fall-risk screening concluded that due to the multidimensional nature of fall risk, there is no universally "ideal" tool applicable in every context or capable of performing flawless risk assessments. 
While fall risk assessment tools demonstrated high accuracy in developmental settings, their effectiveness diminished in other care contexts, as suggested by Meekes et al. Therefore, a reevaluation of the tools mentioned above is necessary when applied in different care contexts. The MFS was meticulously developed as a fall risk assessment tool through a rigorous design process by Morse, Black, et al. Comprising six scored items, the MFS assesses a patient's likelihood of falling based on specific criteria: a history of falling (0 = No, 25 = Yes), the presence of a secondary diagnosis (0 = No, 15 = Yes), the use of ambulatory aids (0 = none/bed rest/nurse assist, 15 = crutches/cane/walker, 30 = furniture), the administration of intravenous therapy (0 = No, 20 = Yes), gait (0 = normal/bed rest/immobile, 10 = weak, 20 = impaired), and mental status (0 = oriented to own ability, 15 = overestimates or forgets limitations). The scores on the MFS range from 0 to 125, with higher scores indicating a greater likelihood of falling. According to Morse, the MFS demonstrated a sensitivity of 78%, a positive predictive value (PPV) of 10.3%, a specificity of 83%, and a high negative predictive value (NPV) of 99.2%. Morse recommended a cut-off score of 45 for optimal use in long-term care wards, chronic disease wards, and emergency wards. While the MFS has demonstrated accuracy in assessing fall risk within its developmental environment, scholars have observed variations in its performance across different patient groups. According to Young Ju Kim, in an acute care setting, the MFS exhibited a sensitivity of 85.7% and a specificity of 58.8% when the cut-off score was set at 50. However, studies by Urbanetto reported that the Brazilian version of the MFS (MFS-B) exhibited only moderate reliability in predicting a patient's risk of falling.
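As an illustration, the six MFS items can be encoded as a small scoring function. This is a sketch using the standard published Morse weights; the category labels and function name are illustrative, not the exact form used by any particular hospital:

```python
# Sketch of Morse Fall Scale (MFS) scoring using the standard published weights.
# Category labels are illustrative.

AMBULATORY_AID = {"none/bedrest/nurse assist": 0, "crutches/cane/walker": 15, "furniture": 30}
GAIT = {"normal/bedrest/immobile": 0, "weak": 10, "impaired": 20}

def morse_fall_score(history_of_falling, secondary_diagnosis,
                     ambulatory_aid, iv_therapy, gait, oriented_to_own_ability):
    score = 25 if history_of_falling else 0          # history of falling
    score += 15 if secondary_diagnosis else 0        # secondary diagnosis
    score += AMBULATORY_AID[ambulatory_aid]          # ambulatory aid
    score += 20 if iv_therapy else 0                 # IV therapy / heparin lock
    score += GAIT[gait]                              # type of gait
    score += 0 if oriented_to_own_ability else 15    # mental status
    return score                                     # total range: 0-125

# Example: patient with a secondary diagnosis, an IV line, and weak gait
print(morse_fall_score(False, True, "none/bedrest/nurse assist", True, "weak", True))  # 45
```

A score of 45 would place this hypothetical patient exactly at the threshold Morse recommended for flagging high fall risk.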
The best estimate to predict falls was found at the cut-off score of 44.78 for the average MFS-B score, with a sensitivity of 95.2% and a specificity of 64%. These findings suggest that MFS performance varies across scenarios, emphasizing the need to adjust the cut-off score to align with specific patient populations and care contexts. Therefore, a reevaluation of the MFS is necessary when applied in different care settings. Currently, the MFS is extensively utilized within Chinese hospitals, particularly in obstetrics and gynecology wards. However, literature regarding its application in this specific setting remains scarce, with most studies focusing on emergency, surgical, and rehabilitation wards. In the study conducted by Morse and her colleagues, the MFS was developed across various units within two institutions: six from the acute care division (comprising general surgical, ophthalmology, and three medical units), two from the long-term care division (psycho-geriatric and nursing home), and eight adult units from a rehabilitation hospital. Notably, none of these units encompassed obstetrics and gynecology departments. It is essential to recognize obstetrics and gynecology as a distinct discipline separate from internal medicine and surgery. Patients within this specialty are exclusively female, with care centered on reproductive and gynecological concerns. Common interventions include extensive hysterectomy, curettage, cesarean section, and laparoscopic surgery, as well as management of menstrual disorders, miscarriage prevention, infertility, gynecological infections, and childbirth assistance. Typically, obstetrics and gynecology inpatients are under 65 years old and of reproductive age, exhibiting superior muscle strength and physiological functionality compared to elderly patients.
Obstetrics and gynecology inpatients possess unique characteristics compared to those in the rehabilitation facilities and geriatric wards where the MFS was initially developed and tested. Therefore, concerns have been raised about the applicability of the MFS in obstetrics and gynecology wards. These considerations prompt us to ask how effective the MFS is in obstetrics and gynecology wards, and whether its optimal cut-off score would deviate from the scores reported in previous studies. It is therefore urgent to re-evaluate the effectiveness of the MFS in obstetrics and gynecology wards and determine its optimal cut-off value to ensure the maximum utility of its application. The primary aim of this study is to examine the validity of the MFS in the context of obstetrics and gynecology wards and to determine its optimal cut-off score. This determination will be accomplished through a thorough analysis of essential metrics, including sensitivity, specificity, accuracy, the area under the Receiver Operating Characteristic curve (AUC), Youden index, Positive Predictive Value (PPV), Negative Predictive Value (NPV), and the Kappa index. 3.1 Aim To examine the validity of the MFS by analyzing electronic medical records on fall risk in obstetrics and gynecology wards and to determine the optimal cut-off score of the MFS. 3.2 Study design This study was designed as a retrospective survey and was conducted from September to December 2022. 3.3 Definition of falls A patient fall is defined as an unplanned descent to the ground, with or without resulting injury. Our assessment includes various fall scenarios, incorporating assisted falls (pre-falls), where the descent is mitigated by prompt intervention from nurses or caregivers, thereby reducing the likelihood of injury, whether due to physiological or environmental factors.
3.4 Sample size calculation Following Yung Hee Sung, who suggested 84 subjects for each group, we calculated the sample size using the formula n = (Zα/δ)² × p(1 − p), where α = 0.05, Zα = 1.96, and δ = 0.075. Based on our pre-test sensitivity (pse = 0.75) and specificity (psp = 0.78), the sample sizes were calculated as 117 cases for the no-fall group and 128 for the fall group. 3.5 Setting and participants This study was conducted in an obstetrics and gynecology hospital and a general hospital, encompassing 8 obstetrical wards and 7 gynecological wards (labor rooms and delivery units were not included), during the period from January 1st, 2020, to July 10th, 2022. The average duration of stay in these wards was 4–5 days. A total of 63,568 patients met the inclusion criteria, comprising 136 fallers and 63,432 no-fallers. Fallers meeting the criteria were designated as the fall group. Each no-faller was assigned a sequential number, and 121 patients were then randomly selected from this group using SPSS for inclusion in the control group. One patient was excluded from the analysis due to insufficient data, leaving 120 no-fall patients ultimately included in this study. 3.6 Data collection In this study, we collected data on the incidence of patient falls during hospitalization, extracting records from the hospitals' adverse-event reporting system. Demographic details were obtained from admission records, and clinical information was retrieved from the hospitals' electronic database. To prevent patient falls, the hospitals in this study routinely conducted fall risk assessments using the MFS. Nurses performed the initial assessment upon admission, revisiting it if there were changes in the patient's medical condition. Nurses implemented preventive measures based on the MFS scores. For patients who experienced falls, we collected the MFS scores for the day of the fall event.
For patients who did not experience falls, we recorded the maximum MFS score during the entire hospitalization period, as a higher MFS score indicates an elevated risk of falling. 3.7. Data analysis The data were processed using SPSS for Windows, version 12.0.1 (SPSS Inc., Chicago, IL, USA). Descriptive statistics, including frequencies, percentages, means, and standard deviations, were employed to analyze the demographic and clinical characteristics of the subjects. Relationships between subjects and demographic and clinical characteristics were assessed using χ2 tests and t-tests. Additionally, the t-test was utilized for normally distributed data, the Mann-Whitney U test for non-normally distributed data, and the chi-square test for comparing categorical data. The optimal cut-off for the MFS was determined by analyzing its performance against the gold standard of patients who experienced falls during their hospitalization. This analysis included metrics such as the area under the Receiver Operating Characteristic curve (AUC), sensitivity, specificity, accuracy, Positive Predictive Value (PPV), Negative Predictive Value (NPV), and Kappa. 3.8. Ethical considerations This research received approval from the ethics committee overseeing human subjects at the hospital where the study was conducted. All participants in this study were adults. To ensure confidentiality and privacy, the data file was encrypted and accessible exclusively to the principal researchers.
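The sample-size figures reported in Section 3.4 can be reproduced numerically. A minimal check, assuming results are rounded to the nearest integer:

```python
# Reproduce the sample-size calculation n = (Z_alpha / delta)^2 * p * (1 - p)
# with alpha = 0.05 (Z_alpha = 1.96) and delta = 0.075, as stated in the text.

def sample_size(p, z_alpha=1.96, delta=0.075):
    return round((z_alpha / delta) ** 2 * p * (1 - p))

print(sample_size(0.75))  # fall group, from pre-test sensitivity  -> 128
print(sample_size(0.78))  # no-fall group, from pre-test specificity -> 117
```

Both values match the group sizes reported in Section 3.4.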
4.1 Characteristics of subjects A total of 136 inpatients who experienced falls and 120 inpatients who did not fall were included in this study. The cumulative inpatient days in the gynecology and obstetrics wards reached 269,533, resulting in a fall rate of 0.504 per 1000 patient days. The table presents the characteristics of the participants, categorized by falling status.
Notably, all participants were female and had no delirium, and there were no significant differences in falls among individuals in gynecological and obstetric wards concerning age, education, marital status, smoking, or drinking habits; inpatients who underwent surgery were more prone to falls (χ2 = 23.993, P = 0.000). We examined the hospitalization reasons of the two patient groups, encompassing 4 induced abortions, 178 childbirths (128 deliveries, 50 cesarean sections), 24 cases of dysfunctional uterine bleeding, 25 of hysteromyoma, 2 of ovarian cysts, 4 of perineal hematoma, 9 of inflammatory disease, 9 of prolapse, and 1 miscarriage. Individuals with visual impairment or paralysis were excluded from the study. We also examined the utilization of sedatives or analgesics among inpatients: 85 parturients were administered analgesics during labor, with ropivacaine (Qilu Pharmaceutical Co., Ltd.) being the primary agent utilized. For 88 surgical inpatients (50 cesarean sections and 38 gynecologic surgeries), patient-controlled analgesia pumps were employed within 48 hours postoperatively; the analgesics administered via these pumps included butorphanol (Jiangsu Hengrui Pharmaceuticals Co., Ltd.) and dexmedetomidine (Cisen Pharmaceutical Co., Ltd.). Five surgical inpatients fell while receiving analgesics. When examining different periods, the number of falls exhibited variation (χ2 = 11.926, P = 0.003). The majority of falls occurred in the afternoon, with 64 (47.2%) falls happening between 12:00 and 18:00, followed by 39 (28.7%) falls between 06:00 and 12:00. 4.2 Validity MFS score The MFS scores for the no-fallers ranged from 0 to 90, with a mean score of 31.71±17.156. Conversely, the MFS scores for the fallers ranged from 0 to 85, with a mean score of 54.18±21.06. Both groups exhibited skewed distributions (Z = 2.544, P = 0.000; Z = 2.126, P = 0.000).
Upon comparing the scores between the two groups, the fall group demonstrated significantly higher scores than the no-fall group (Z = 8.153, P < 0.001). ROC of MFS The ROC curve of the MFS yielded an AUC of 0.791±0.029 (AUC > 0.5, P = 0.000), with a 95% CI of (0.733, 0.848). Sensitivity, specificity, and Youden index We conducted tests at 5-point intervals and increased the number of tests in the cut-off intervals corresponding to the peak and sub-peak of the Youden index, to avoid omission. As detailed in the table, the cut-off scores were systematically assessed within the range of 15 to 80. Sensitivity gradually decreased, and specificity increased, as the cut-off score increased. The Youden index exhibited a peak at the cut-off score of 37.5–40, followed by a secondary peak at 52.5–55, owing to the absence of scores in the ranges 36–39, 41–44, and 46–49 in the database. Subsequently, the Youden index decreased with further increases in cut-off scores. AUC fluctuated between 0.500 and 0.772. The most substantial AUC, reaching 0.772, was observed at the cut-off score of 40, with notable AUC values of 0.763 and 0.761 at cut-off scores of 55 and 45, respectively. 4.3 The optimal cut-off score for the MFS As per the data presented in the table, a comparative analysis among cut-off scores of 40, 45, 50, and 55 reveals that the cut-off score of 40 yields superior results, evident in a higher Youden index (0.543), Kappa coefficient (0.540, 95% confidence interval (95% CI) = 43.8%–64.3%), an acceptable sensitivity of 0.735 (95% CI = 59.8%–89.4%), fairly good specificity (0.808, 95% CI = 65.6%–98.6%), Positive Predictive Value (PPV) of 0.813 (95% CI = 66.2%–98.9%), Negative Predictive Value (NPV) of 0.729 (95% CI = 59.1%–89.0%), and an overall accuracy of 0.770 (95% CI = 66.6%–88.5%).
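The threshold search described above can be sketched generically: for each candidate cut-off, scores at or above the threshold are classified as "at risk," and sensitivity, specificity, and the Youden index are computed against the observed fall outcomes. The scores below are toy values for illustration, not the study data:

```python
# Generic cut-off evaluation for a fall-risk score; "positive" means
# score >= cutoff (patient classified as at risk of falling).

def cutoff_metrics(faller_scores, nonfaller_scores, cutoff):
    tp = sum(s >= cutoff for s in faller_scores)    # fallers flagged at risk
    tn = sum(s < cutoff for s in nonfaller_scores)  # no-fallers not flagged
    sensitivity = tp / len(faller_scores)
    specificity = tn / len(nonfaller_scores)
    return sensitivity, specificity, sensitivity + specificity - 1  # Youden index

# Toy data: pick the candidate cut-off with the largest Youden index
fallers = [40, 55, 60, 45, 70, 35]
nonfallers = [10, 25, 30, 20, 45, 15]
best = max(range(15, 85, 5), key=lambda c: cutoff_metrics(fallers, nonfallers, c)[2])
print(best)  # 35 for these toy scores
```

The study applied this kind of sweep over cut-offs 15 to 80, with extra tests near the Youden-index peaks.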
This study represents the first application of the MFS in gynecological and obstetric wards in China, and direct comparisons with studies conducted in community and rehabilitation wards are currently unavailable. 5.1 Factors affecting falls in the obstetrics and gynecology ward Previous research consistently highlights the influence of physical impairments and weakness associated with aging, contributing to an elevated risk of falling and subsequent injuries. However, our investigation reveals no discernible distinctions in the demographic characteristics of patients between the fall and non-fall groups in gynecological and obstetric wards. This finding contrasts with numerous studies conducted in acute wards, which report variations in falls based on gender and age. Consistent with previous studies, falls were associated with surgery. We also observed five cases of falls among inpatients who underwent surgery and used pain-relief medications afterward; however, due to sample size limitations, we were unable to distinguish between the effects of surgery and of pain-relief medications on falls. Insomnia and activity levels, by contrast, did not exhibit a notable association with falls.
This may be attributed to the distinctive characteristics of women in gynecology and obstetrics wards, who are predominantly young and fertile, displaying ample physical strength, unrestricted mobility, better sleep quality, and higher activity levels. This may explain the absence of age, activity level, and insomnia as discernible fall risk factors in obstetrics and gynecology. 5.2 The incidence of falls in the obstetrics and gynecology wards In this study, the observed fall rate in gynecology and obstetrics wards, 0.504 per 1000 patient days, was notably lower than in medical units in Boston and New York City using the Patient-Centered Fall-Prevention Tool Kit, where rates ranged from 2.92 to 2.49 per 1000 patient days. It was also lower than rates reported in a surgical unit in the southeastern United States, which experienced an average monthly preintervention fall rate of 8.67 falls per 1000 patient days, decreasing to 5.07 falls postintervention. The lower fall rate in the obstetrics and gynecology wards in China may be influenced not only by the factors mentioned above but also by the practices of Chinese nursing staff. Given that most Chinese patients receive one-on-one, or even more intensive, care from their families, timely assistance from caregivers could play a mitigating role in reducing the risk of falls. 5.3 Implications from fall events An analysis of fall records in this study revealed a concentration of falls between 12:00 and 18:00. This temporal trend can be explained by the fact that most medical treatments are typically administered in the morning, so patients engage in more autonomous activities during the afternoon, potentially elevating fall risk. This observation underscores the importance for nurses to be particularly vigilant during patients' more active periods and the need for enhanced fall prevention measures during these times.
5.4 The application of the MFS in obstetrics and gynecology wards demonstrates good effectiveness Through the computation of key metrics, including sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy, we assessed the discriminatory capability of the scale. The findings indicate that the MFS exhibits a commendable ability to differentiate fall risk among gynecological and obstetric patients, with an AUC of 0.772. This compares favorably with previous studies, such as Hye-Mi Noh's investigation, which reported an AUC of 0.615 in elderly inpatients, and Sikha Bagui's research, conducted at a community-owned hospital, with an AUC of 0.5967. This implies that the MFS may offer enhanced predictive accuracy in the context of gynecological and obstetric patients compared to its performance in other settings, showcasing its utility as a reliable tool for assessing fall risk in these specialized wards. 5.5 The optimal cut-off score of the MFS We observed that the cut-off score of 40 yields the largest AUC among the tested cut-offs and propose 40 points as the optimal cut-off score for gynecological and obstetric inpatients. Although the AUC at the cut-off score of 40 (0.772) is marginally smaller than the AUC of the full ROC curve (0.791), it still indicates that the MFS is competent in predicting a patient's risk of falling, given that the AUC exceeds 0.7. This aligns with the findings of Sikha Bagui, who conducted a study in a community-owned hospital and recommended adjusting the cut-off value to 40 points (sensitivity 63.77%, specificity 50.44%). However, our suggestion differs from Morse's earlier proposal of a low-risk cut-off score of 25 and a high-risk cut-off score of 51, Hye-Mi Noh's recommended cut-off score of 45, and Baek's suggestion of 51 points as the optimal threshold.
The lower fall rate in the obstetrics and gynecology wards in China may be influenced not only by the factors mentioned above but also by the practices of Chinese nursing staff. Given that most Chinese patients receive one-on-one care from their families, or even more, timely assistance from caregivers could play a mitigating role in reducing the risk of falls. An analysis of fall records in this study revealed a concentration of falls between 12:00 and 18:00. This temporal trend can be explained by the fact that most medical treatments are typically administered in the morning, potentially elevating fall risk as patients engage in more autonomous activities during the afternoon. This observation underscores the importance for nurses to be particularly vigilant during patients' more active periods, emphasizing the need for enhanced fall prevention measures.
Despite the above findings, this study has limitations. Firstly, the use of a purposive sample and a retrospective design imposes constraints on the generalizability of the study; Terry P. Haines reported that design-related bias in evaluations of tool predictive accuracy can lead to overoptimistic results, with retrospective evaluations yielding significantly higher Youden indices . Secondly, the MFS was not subjected to a comparative analysis with other tools for assessing fall risk. Therefore, future research employing a prospective design is imperative to validate the applicability of the MFS in gynecological and obstetric wards. This study demonstrates the efficacy of the MFS in predicting fall risk within gynecological and obstetric wards. The findings highlight that, specifically for gynecological and obstetric inpatients, the MFS demonstrates good effectiveness and exhibits optimal performance when applied with a cut-off score of 40.
Remembering Stephen Schwartz | 9b7347c3-980b-4a1e-9536-2737edf99685 | 7255305 | Pathology[mh] | |
null | 401bc2c9-ff28-4c20-9a32-9c70ffb34024 | 11332496 | Internal Medicine[mh] | Acta Oncologica has for several decades supported Nordic cancer-related symposia, and in 2023 a new biannual Acta Oncologica Nordic Precision Cancer Medicine Symposium (NPCM) series was initiated. The first NPCM conference ‘Merging Clinical Research and Standard Healthcare’ took place in Oslo, September 17–19 2023 and was hosted by Oslo University Hospital and the Norwegian Centre for Clinical Cancer Research, MATRIX. Over 2 days, the conference gathered participants from key precision medicine environments from Australia, the US, and Europe. Precision cancer medicine is changing oncology through advanced molecular profiling, innovative clinical trials, and an increasing number of targeted drugs and treatment options. Identified molecular properties may explain why patients with the same type and stage of cancer respond differently to the same treatment. For the precision cancer medicine approach to have an impact and move towards implementation in national healthcare systems, it is essential to have access to both advanced molecular diagnostics and drugs. Although the promise of precision cancer medicine is clear and novel anti-cancer drugs targeting genetic alterations enter the market every year, implementation is still challenging. Access to these approaches is unequal due to varying availability of adequate molecular diagnostics, uncertainties regarding real-world effectiveness, hurdles regarding co-payment and reimbursement, and limited access to clinical trials and early access programmes. Over the last decade, several national initiatives have addressed the challenges with implementation of precision cancer medicine, and during the NPCM 2023 conference, the different initiatives gathered to share and discuss key learnings and synergy potential of international collaboration within this field. 
The first Nordic Precision Cancer Medicine Symposium brought together experts from different areas important for precision cancer medicine implementation into standard healthcare, and topics addressed during the conference included molecular pathology and molecular tumour boards (MTBs), biomarkers for stratification, clinical study design, DRUP-like clinical trials, scaling of precision medicine ecosystems as well as health economics, implementation, and policies. In this special edition focusing on precision medicine, altogether 10 speakers and poster presenters at the NPCM publish recent precision cancer medicine updates. Three keynote speakers presented new developments within the precision cancer medicine field at the NPCM 2023, including presentations on cutting-edge molecular diagnostics, the Australian implementation initiative, and regulatory developments. Gordon Mills from the Knight Cancer Institute, Oregon Health & Science University, presented a new clinical study design, targeting adaptive responses in cancer. Malignant cells and the tumour environment adapt to therapy. In the Serial Measurements of Molecular and Architectural Responses to Therapy (SMMART) trial, the patient’s cancer is followed over time through serial biopsies and comprehensive analysis of tumour cells and the tumour ecosystem. Drug and drug combinations are subsequently adjusted based on these analyses to avoid resistance. A big challenge of multi-drug treatment is to measure adaptive responses in real-time, and tools beyond RECIST criteria are therefore required. David Thomas from the Garvan Institute of Medical Research in Sydney gave an overview of the Australian precision cancer medicine initiatives. Omico has established comprehensive genomic profiling for patients with advanced or incurable cancer. 
The national Molecular Screening and Therapeutics study enrols patients with incurable cancer and has so far recruited 750 patients, and the new PrOSPeCT programme is a precision oncology screening platform enabling clinical trials by linking genomic technology to trials of new therapeutic products. Thus, Australian patients with advanced cancer have access to systematic precision cancer medicine. Francesco Pignatti from the European Medicines Agency presented pan-cancer drug development from a regulatory perspective. Pignatti addressed some of the challenges with tumour-independent indications and approving drugs based on single-armed trials. Pignatti concluded that approval of biomarker-driven indications is similar to other approvals in high-unmet need situations. Moreover, the importance of addressing knowledge gaps prior to an approval process was emphasized. The NPCM conference consisted of five conference sessions addressing molecular precision diagnostics and MTBs, design of clinical trials, health economics, implementation and guidelines, scaling of precision medicine ecosystems, and the growing ecosystem of DRUP-like clinical trials. In each session, three internationally invited speakers presented front-line research connected to the topic. In addition, short talks selected from abstract submissions were included. Session one, Molecular pathology and MTBs, addressed advanced precision diagnostics. Access to adequate molecular profiling is crucial for the success of precision medicine. The three invited speakers in this session, Funda Meric-Bernstam from the MD Anderson Cancer Centre in Houston, Texas, Maud Kamal from Institut Curie in Paris, and Lynette Sholl from the Bringham and Women’s Hospital and Harvard Medical School in Boston, highlighted key learnings from ongoing initiatives. Meric-Bernstam emphasized that a comprehensive analysis on DNA/RNA/protein is necessary to improve patient selection and treatment planning. 
Kamal gave an overview of their MTB and highlighted the need for clinical practice guidelines in genomic testing, as well as the need to provide decision support tools and train physicians to interpret genomic data. Sholl gave an overview of the institutional cancer profiling in Boston, where more than 45,000 patients have already been screened. She noted that 10–20% of cancer patients harbour a germline alteration conferring cancer susceptibility, and that tumour-only testing misses important germline variants. A paired tumour-germline testing platform has therefore been established and implemented in Boston. Sholl emphasized that operationalising routine germline testing for cancer patients requires substantial inter-disciplinary teamwork. In this Acta Oncologica special edition, two NPCM short talk speakers present new findings highlighting the importance of risk stratification and molecular profiling. The Seibert lab in San Diego addresses risk stratification in prostate cancer screening . Niehusmann et al. focus on molecular profiling and inclusion of CNS-tumour patients in the national IMPRESS-Norway trial, and the paper presents work related to precision diagnostics and therapeutic implications in desmoplastic non-infantile ganglioglioma . Moreover, Fjørtoft et al. in this special issue present a review focusing on the immune microenvironment upon breast cancer progression . Increased understanding of disease mechanisms is important to continue to develop the precision cancer medicine field moving forward. Session two focused on the need for innovative clinical study designs in the field of precision cancer medicine. Richard Schilsky from the University of Chicago presented the Targeted Agent and Profiling Utilization Registry (TAPUR) study , the planning of which inspired several of the European national initiatives, including the DRUP trial in the Netherlands. TAPUR is a pragmatic, multi-basket, non-randomized trial where targeted FDA (U.S.
Food & Drug Administration) approved drugs are used outside indication. Results from TAPUR show that 34 cohorts have been completed . Emile Voest from the DRUP study highlighted how a network of DRUP-like clinical trials across Europe collaborate to share data and combine cohorts across trials, greatly enhancing the impact of the individual national initiatives. In this Acta Oncologica special issue, these large European consortia and their impact are described in more detail . Furthermore, there is still a need for new innovative clinical trial designs, as highlighted by Christophe Le Tourneau from the Institut Curie in Paris, and Voest also presented the novel DRUP ATTAC study design, offering combinatorial treatment in the presence of multiple molecular targets. Session three addressed how precision cancer medicine challenges established models for reimbursement; there is thus a need for policy innovation to facilitate implementation of precision oncology. Sahar B. van Waalwijk van Doorn-Khosrovani from the National Funder's Committee for Evaluation of Specialised Medicines and Companion Diagnostics, CZ Health Insurance, The Netherlands, explained how the risk-sharing reimbursement model in the DRUP and DAP (Drug Access Protocol) studies addresses the challenges that arise when reimbursement decisions are made based on single-arm trials. The risk-sharing reimbursement models handle uncertainties regarding evidence and costs to maintain the sustainability of the healthcare system. Katarina Steen Carlsson from the Swedish Institute for Health Economics reflected on how existing Health Technology Assessment (HTA) models can be adapted to facilitate reimbursement decisions in precision cancer medicine. Moreover, Bettina Ryll from the Stockholm School of Economics Institute for Research described how a national multi-stakeholder ecosystem is necessary for precision cancer medicine implementation.
She also highlighted how the European DRUP-like trial community is a self-organizing open innovation ecosystem interacting with national decision-makers, payers, HTA bodies, the commercial sector, and civil society . Monika Frenzel from the French National Research Agency described the European funding programmes for personalised medicine. In particular, Frenzel presented the European Partnership for Personalised Medicine (EP PerMed) programme, which was launched towards the end of 2023. This strategic platform will run for 10 years with an approximate budget of 330 million Euros. Session four focused on the scaling of precision medicine ecosystems. Technology scaling is a major challenge when broadening precision cancer medicine initiatives to a national level. Jesus Garcia-Foncillas from the Jiménez Diaz Foundation University Hospital in Madrid and Benedikt Westphalen from the Munich Comprehensive Cancer Center shared their experiences in the rapidly evolving precision cancer medicine landscape. Kadri Toome from Tartu University Hospital presented the results from the Estonian initiative, where the National Health Insurance Fund is financing tumour profiling at a national level. Estonia is currently in the process of establishing a DRUP-like clinical trial, EstOPreT . The final session, on the growing ecosystem of DRUP-like clinical trials and the European-wide initiatives PCM4EU and PRIME-ROSE , included updates from all ongoing DRUP-like clinical trials in Europe. Hans Gelderblom from Leiden University Medical Center presented the original DRUP trial . The trial opened in 2016, and key elements of the DRUP success include MTBs, good research infrastructures, and involvement of payers and pharmaceutical companies. The latest update from the DRUP trial is presented by Mohammad et al. in this special edition .
Moreover, Gelderblom described how the first stage three expansion cohort using nivolumab for treatment of dMMR/MSI solid tumours met evaluation criteria, resulting in reimbursement of this treatment since July 2022 in the Netherlands. The second stage three cohort includes olaparib treatment of patients with BRCA mutated tumours. This cohort will include patients from several DRUP-like clinical trials. Åslaug Helland from Oslo University Hospital gave an update from the IMPRESS-Norway trial . The trial started accrual in April 2021 and has so far included 1167 patients in the molecular profiling phase. Of these, 31% had an actionable molecular alteration and a matching targeted drug eligible for inclusion in the treatment phase of the study . According to Puco et al., 40% of the treated patients showed clinical benefit at 16 weeks . IMPRESS-Norway has started recruitment of patients with biallelic BRCA1/2 inactivation to the stage three olaparib cohort, which is financed through public–private risk-sharing modelled after DRUP. Kristoffer Rohrberg from the Copenhagen University Hospital presented the ProTarget trial , which has been running for 3 years. ProTarget has so far evaluated 5000 genomic profiles and 185 patients have been treated in 112 cohorts. Katriina Jalkanen from the Helsinki University Hospital presented the FINPROVE trial at the conference, and an update is also published in this special issue . The trial opened at the end of 2021, and so far, 310 patients have been evaluated and 85 patients have been offered treatment. Loic Verlingue from Centre Leon Berard in Lyon gave an overview of the multi-centric MOST trials MostPlus and MEGAMOST, with altogether 14 cohorts. MostPlus has so far treated 145 patients, and the latest update from the MOST trial family is presented in this precision cancer medicine edition . The DETERMINE trial in the UK was presented by Matthew Krebs from the University of Manchester. 
This trial opened in November 2022 and is recruiting via existing national screening programmes. This Acta Oncologica special edition presents two additional precision cancer medicine initiatives, in Portugal and Hungary , respectively. The recently opened Precision Oncology Platform (POP) trial is pioneering the implementation of a precision cancer medicine strategy in Portugal . Toth et al. describe the application of comprehensive molecular genetic profiling in precision cancer medicine in Hungary , the first crucial infrastructure that needs to be in place for successful precision cancer medicine implementation. Altogether, there are several well-established national initiatives, and some of these are described in detail in this special issue. Kjetil Taskén from the Oslo University Hospital rounded off the NPCM conference with an overview of how the DRUP-like clinical trial communities collaborate through the EU-funded initiatives PCM4EU and PRIME-ROSE, as also described in this issue . This first Acta Oncologica Nordic Precision Cancer Medicine Symposium gathered renowned speakers from all over the world and facilitated increased international collaboration. The talks sparked good discussions and a vibrant, interactive environment. The next conference is planned for 2025. In this Acta Oncologica special issue, some of the addressed topics and relevant updates are described in more detail.
Development of a risk assessment scale for use by nurses to assess the risk of deep vein thrombosis in gynaecology in China: A Delphi‐based study | 96f6b171-ef2d-4ab8-b285-610d52a0a0d7 | 10277397 | Gynaecology[mh] | INTRODUCTION Deep vein thrombosis (DVT) is a disorder of venous return caused by abnormal blood clotting in the deep veins; consequently, the lumen of the vein is obstructed. Epidemiological surveys show that there are about 10 million new patients with DVT every year worldwide (Kakkos et al., ). DVT is often asymptomatic and insidious, making it easy to miss or misdiagnose. In its later stage, DVT syndrome causes pain, swelling, superficial varicose veins and skin changes (pigmentation, eczema, sclerosis), and in severe cases venous ulcers develop. The resulting loss of working capacity seriously affects patients' survival and quality of life. It is therefore of great significance to identify patients' high-risk factors accurately. The incidence of venous thrombosis in the United States is as high as 0.0816% (Liao et al., ). The incidence of venous thromboembolism (VTE) in Asian populations shows a year-on-year increasing trend (Kafeza et al., ; Lee et al., ). In China, the incidence of DVT in patients who have had a gynaecological procedure in the absence of thromboprophylaxis is 9.2%–15.6% and the incidence of pulmonary embolism (PE) is 46% (Li & Jia, ; Liu et al., ; Xiong & Xu, ); mortality due to DVT is second only to that of tumours and myocardial infarction. Various studies (Alexander et al., ) have shown that more than half of inpatients are at risk of developing VTE; however, only half of the high-risk patients receive preventive measures against VTE.
Therefore, a correct understanding of the risk factors for VTE in inpatients, together with the use of an accurate risk prediction model with high sensitivity and specificity to identify risk factors for VTE in gynaecological inpatients, is conducive to improving prevention and reducing the incidence and mortality of DVT.

BACKGROUND In recent years, as domestic and foreign scholars have deepened research on DVT, DVT risk assessment tools have been continuously improved. Specialized and universal DVT risk assessment tools have emerged one after another, and targeted preventive measures have been formed according to the causes of different diseases. Gynaecological DVT risk assessment scales include the Caprini scale (Caprini, ) and the G-Caprini scale (Lang et al., ). The Caprini scale has a low screening rate for high-risk populations; the G-Caprini scale is suitable for VTE risk classification after gynaecological surgery, but not for patients using superovulation drugs or peripheral venous catheters, or for those within 24 h of uterine artery embolism. In 2017, Chinese gynaecological medical experts formulated the 'Expert Consensus on the Prevention of DVT and PE after Gynecological Surgery' based on the characteristics of Chinese ethnicity and disease (Lang et al., ), but this consensus is only suitable for the assessment of DVT in Chinese patients after gynaecological surgery. In 2019, Chinese nursing experts formulated the 'Accelerated Rehabilitation Gynecological Perioperative Nursing Chinese Expert Consensus' (Bo et al., ), suggesting that the Caprini scale be used to assess DVT in gynaecological perioperative patients. The Caprini scale has high sensitivity for screening high-risk DVT patients, but it is currently used to predict DVT risk in hospitalized medical and surgical patients and has been applied mainly in Western populations.
However, due to ethnic differences between East and West, as well as differences in disease incidence characteristics and the level of medical technology, the risk factors implemented in the above scales are not completely applicable to the Chinese population; these scales are therefore of relatively limited use in China (Bo et al., ; Lang et al., ). At present, there is no risk assessment scale suitable for patients with gynaecological diseases in China, and it is necessary to establish a DVT assessment scale for gynaecological patients in line with national conditions, based on the characteristics of Chinese gynaecological patients. Building on improvements to the Caprini scale, combined with the expert consensus on the prevention of deep vein thrombosis and pulmonary embolism after gynaecological surgery, the authors adopted the Delphi method and, after two rounds of expert consultation, developed a new DVT assessment scale for gynaecological inpatients in China.

METHODS 3.1 Study design A Delphi method was used in this study. This method involves the submission of questionnaires to expert groups in the relevant fields to solicit their opinions in order to reach a consensus on specific issues (Keeney et al., ; McKenna, ). The initial list of survey items was generated based on the Caprini risk assessment scale, combined with the expert consensus regarding the prevention of DVT and PE after gynaecological surgery in China and the characteristics of gynaecological diseases. From June to July 2020, 11 experts involved in research in the gynaecological field were invited to participate in this study. A purposive sample of experts was surveyed to arrive at recommendations. After three rounds of expert consultation, the experts' opinions tended towards unanimity, and an evaluation form was developed on this basis.
3.2 Sampling procedures Through sample selection, we ensured diversity among the experts and their geographical locations. Consistent with empirical recommendations regarding the size of a Delphi panel (Akins et al., ), we aimed to consult a minimum of 10 experts in each round of the Delphi survey. To allow for a possible refusal to participate or other attrition, 11 experts were invited to participate in the study.

3.3 Delphi method 3.3.1 Expert selection The experts were affiliated with 8 hospitals distributed across 4 cities in China, namely, Shenzhen, Guangzhou, Nanchang and Shanghai; all of these hospitals were among the top three hospitals in their respective cities. The expert group comprised 5 gynaecologists and 6 gynaecological nurses. The qualifications of each gynaecologist included a master's degree or above and a senior professional title. The qualifications of each gynaecological nurse included an undergraduate degree or above and a senior professional title. All the experts were required to have a minimum of 10 years of clinical experience in the field of gynaecology. Table presents the demographic characteristics of the experts. The inclusion criteria were as follows: 1. the nursing management expert must have occupied the position of director, deputy director or head nurse of the nursing department and must have been engaged in clinical nursing management for more than 10 years in a Grade A general hospital; 2. the clinical nursing expert must have been engaged in gynaecological clinical nursing for more than 10 years; 3. the gynaecologist must have had more than 10 years of clinical experience in gynaecology.

3.3.2 Questionnaire survey The first round of the questionnaire survey was conducted among the 11 experts and began in October 2019. All experts received the electronic questionnaires via email.
The research brief and demographic survey tool were attached to the 40‐item questionnaire. For the second round of the questionnaire survey, a new questionnaire was formulated based on the results of the first round of consultation and implemented in February 2020. The second round of consultation involved the participation of all 11 experts; no demographic survey or summary of the results of the first round of consultation was made available to the experts. The third round of consultation aimed to assess the rationality of the risk factors selected by the experts in the first two rounds and to develop the assessment frequency for the DVT scale. We negotiated with 6 experts in Shenzhen through face-to-face seminars; the other 5 experts were consulted via video conference to discuss whether the tool met the requirements of clinical nursing.

3.4 Data collection and analysis The data were collected from October 2019 to February 2020, using two rounds of expert consultation based on the Delphi approach. The data collection was completed by 3 researchers. We evaluated the consensus among the responses to the questionnaire. Discrete data were expressed as frequency and percentage, whereas continuous data were expressed as mean, standard deviation, and coefficient of variation. The coefficient of variation (CV) was used to evaluate the consensus among the items in the questionnaire. The consensus for each item was defined as a mean rating of >3 and a CV of <0.5 in the first round, and as a CV of <0.3 in the second round (Zhou et al., ). The consensus among experts was evaluated using Kendall's coefficient (W) of concordance (Klastersky & Paesmans, ; Zhou et al., ). Statistical significance was defined as p < 0.05. All data were analysed using SPSS software version 23 (SPSS Inc.).

3.5 Ethics statement The study was approved by the ethics committee of our hospital, and informed consent was obtained from all participants.
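The round-one and round-two consensus rules stated above (mean rating > 3 with CV < 0.5, then CV < 0.3) can be applied item by item. A hedged sketch, with the 11 expert ratings invented purely for illustration:

```python
import statistics

def item_consensus(ratings, cv_limit, mean_floor=3.0):
    """An item reaches consensus when the experts' mean rating exceeds
    `mean_floor` and the coefficient of variation (CV = SD / mean, using
    the sample standard deviation) stays below `cv_limit`."""
    mean = statistics.mean(ratings)
    cv = statistics.stdev(ratings) / mean
    return mean > mean_floor and cv < cv_limit, mean, cv

# Invented round-one ratings (1-5 Likert scale) from 11 experts for one item.
ratings = [4, 5, 4, 4, 3, 5, 4, 4, 5, 4, 4]
keep, mean, cv = item_consensus(ratings, cv_limit=0.5)
```

The same function with `cv_limit=0.3` reproduces the stricter round-two rule.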
We negotiated with 6 experts in Shenzhen through face‐to‐face seminars; the other 5 experts were consulted via video conference to discuss whether the tool met the requirements of clinical nursing. Data collection and analysis The data were collected from October 2019 to February 2020, using two rounds of expert consultation based on the Delphi approach. The data collection was completed by 3 researchers. We evaluated the consensus among the responses to the questionnaire. Discrete data were expressed as frequency and percentage, whereas continuous data were expressed as mean, standard deviation, and coefficient of variation. The coefficient of variation (CV) was used to evaluate the consensus among the items in the questionnaire. The consensus for each item was defined as a mean rating of >3 and a CV of <0.5 in the first round, and as a CV of <0.3 in the second round (Zhou et al., ). The consensus among experts was evaluated using Kendall's coefficient (W) of concordance (Klastersky & Paesmans, ; Zhou et al., ). Statistical significance was defined as p < 0.05. All data were analysed using the SPSS software version 23 (SPSS Inc.). Ethics statement The study was approved by the ethics committee of our hospital, and informed consent was obtained from all participants. RESULTS The questionnaires were completed by 11 experts in both rounds. All the responses were comprehensive and on‐topic. 4.1 Expert group The expert group consisted of 11 experts who expressed interest in participating in the study and provided informed consent. The group was composed of 11 female experts with an average clinical experience of 23.091 ± 8.561 years; 5 held senior titles, 3 deputy senior titles and 3 intermediate titles; 2 held doctorate degrees, 3 master's degrees and 6 undergraduate degrees. 4.2 Round one A Kendall's W of 0.264 (p < 0.001) was obtained in the first round of consultation, as shown in Table .
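The consensus statistics described above (a per-item coefficient of variation across raters, and Kendall's W for overall agreement) can be computed without SPSS. The following Python sketch is an illustrative re-implementation; the tie-correction term for Kendall's W is omitted for brevity, so W is slightly underestimated when ratings contain many ties.

```python
def ranks(scores):
    """Convert one rater's scores into ranks (average rank for ties)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    r = [0.0] * len(scores)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1  # 1-based average of the tied positions
        for k in range(i, j + 1):
            r[order[k]] = avg_rank
        i = j + 1
    return r

def kendalls_w(ratings):
    """Kendall's coefficient of concordance for an m-rater x n-item matrix.

    W ranges from 0 (no agreement) to 1 (complete agreement).
    No tie correction is applied (simplification).
    """
    m, n = len(ratings), len(ratings[0])
    rank_sums = [sum(col) for col in zip(*(ranks(row) for row in ratings))]
    mean_r = sum(rank_sums) / n
    s = sum((r - mean_r) ** 2 for r in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

def coefficient_of_variation(item_scores):
    """Sample CV of one item's ratings across raters."""
    mean = sum(item_scores) / len(item_scores)
    var = sum((x - mean) ** 2 for x in item_scores) / (len(item_scores) - 1)
    return var ** 0.5 / mean
```

With perfectly agreeing raters `kendalls_w` returns 1.0; an item would then be retained when its mean rating exceeds 3 and its CV falls below the round's cutoff (0.5 in round one, 0.3 in round two).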
The mean values of the ratings and the CVs of the items for this round are shown in Table . Items with an average score of <3 or a CV of >0.5 were deleted. In addition to rating the items, the experts included some comments, questions and suggestions in the questionnaire. The following issues were discussed among the researchers: 4 items relating to body mass index (BMI) were modified; 4 items pertaining to surgical time were deleted, whereas 2 items were added; 1 item was added to the dimension of trauma risk; 1 item relating to gynaecological risk was modified; 2 items were added to the surgical history; and 6 items were excluded from the laboratory examination section. 4.3 Round two Kendall's W for the second round of consultation was 0.322 (p < 0.001) as shown in Table . The scoring method and summary of this round are shown in Table . The consensus for each item in the second round was defined as a CV <0.3. Kendall's W was 0.34 (p < 0.01), and 1 item from the medical history section was deleted. 4.4 Round three A total of 34 items remained after two rounds of expert consultation. The draft was modified based on the 11 experts' suggestions. A draft on how to assess gynaecological DVT was developed. The process of DVT scale assessment frequency based on the tool is shown in Figure . DISCUSSION To the best of our knowledge, there is currently no DVT risk assessment scale that is suitable for use in all patients with gynaecological conditions in China; therefore, the Delphi expert panel is crucial to the research regarding the development of a DVT risk assessment scale (Keeney et al., ; McKenna, ). The Delphi team consisted of gynaecologists and gynaecological nurses. Experts involved in the present study came from eight hospitals in four provinces. Their opinions can be considered to be geographically representative. The professional background of these experts helped to ensure the efficacy of the research.
Since the Kendall's W values in the first two rounds were 0.264 and 0.322, respectively, with p < 0.001, we concluded that the consensus among the experts was statistically significant. The first two rounds of consultations aimed to refine the questionnaire items according to the experts' ratings: 4 items were modified, 11 were deleted and 5 were developed during the process. The first round of expert consultation led to a refinement of the items in the questionnaire based on the opinions provided by the experts. 4 items pertaining to BMI that were not in line with the body composition of the Chinese population were changed. The high score attributed to the surgical procedure in the Caprini risk assessment scale leads to an increase in the false‐positive rate of DVT and an increase in the number of interventions targeting DVT (Qu et al., ). On the basis of the expert recommendations and consensus on the prevention of DVT after gynaecological surgery, 4 items with high scores in the surgical section of the Caprini risk assessment scale were deleted, and 2 items with an individual score of 1 were added to the section, namely open surgery. A section pertaining to the patient's surgical history was added as follows: two or more operations within a month and uterine artery embolization (within 24 h of the operation). After uterine artery embolization, the affected limb is immobilized for 24 h; the resulting slow blood flow is a risk factor for DVT. An increase in the score attributed to the gynaecological risk caused by the use of superovulation drugs (within 1 month of DVT presentation) was recommended by 7 of the 11 experts. The large amount of oestrogen found in superovulation drugs leads to an increase in the levels of fibrinogen and clotting factors and a decrease in antithrombin levels; these effects lead to a hypercoagulable state of the blood.
In addition, the use of superovulation drugs can cause ovarian hyperstimulation syndrome, leading to an increase in vascular permeability and hypercoagulability. Regarding the trauma risk section, experts recommended increasing the score attributed to peripheral intravenous catheterization (in the form of a peripherally inserted central‐line catheter [PICC] or a port); their rationale for this was that postoperative chemotherapy in patients with gynaecological malignancies requires the implantation of a PICC or a port, and damage to the vein wall is a significant risk factor for DVT. In China, laboratory investigations are rarely used to assess the presence of DVT in patients with gynaecological conditions. Therefore, the 6 items from the laboratory evaluation section were deleted on the basis of the experts' recommendations. In the second round of expert consultation, 1 item from the medical history section was deleted. The risk assessment scale consists of the following eight sections: age, BMI, athletic ability, gynaecological risk, trauma risk, medical history, surgical history and surgical time; these sections are further divided into a total of 34 categories. In the third round, the experts' opinions, gathered through video conferences and symposiums, had essentially stabilized; once the questionnaires collected in this round were collated, the final version of the scale could be established. A total of 34 items were retained after the third round of consultation, and the frequency of assessments for the DVT scale was then discussed. The total score obtained from the scale can classify patients into categories of low risk, medium risk and high risk. A higher score indicates a greater risk of DVT (Caprini, ; Gould et al., ; Hostler et al., ). These items may enable nurses to screen patients who are at a high risk of DVT. Consequently, interventions can be implemented for the high‐risk group to reduce the incidence of DVT. The frequency of assessments for the DVT scale is shown in Figure .
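The low/medium/high stratification described above can be expressed as a simple lookup on the total score. The cutoff values below are hypothetical placeholders for illustration only; the paper's actual thresholds are given in its tables.

```python
def classify_dvt_risk(total_score, low_max=2, high_min=5):
    """Map a total scale score to a DVT risk category.

    low_max and high_min are illustrative cutoffs, NOT the published ones.
    A higher total score maps to a higher risk tier.
    """
    if total_score <= low_max:
        return "low risk"
    if total_score < high_min:
        return "medium risk"
    return "high risk"
```

In practice, nurses could use the resulting tier to choose the reassessment frequency for each patient.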
This study aimed to construct a DVT risk assessment scale suitable for use in the Chinese population to assess the risk of DVT in patients with gynaecological conditions. The scale takes into account Chinese ethnic characteristics and the medical technology available in China. Furthermore, experts who participated in this study have worked in the relevant fields for at least 15 years. Their clinical experience has familiarized them with the local clinical practices in China. Compared with the existing DVT risk assessment scales available in other countries (Angchaisuksiri, ; Caprini, ), the BMI items of the DVT risk assessment scale in this study were modified in order to accommodate Chinese ethnic characteristics. Therefore, we have succeeded in developing an efficacious risk assessment scale for DVT in Chinese patients with gynaecological conditions. 5.1 Limitations Our study has certain limitations. The experts who participated in this study originated from the south‐eastern coastal region of China, which is the most developed region in the country. Owing to a geographic bias, the scale should be customized before it can be applied to other regions of China. Additional research is currently underway to determine whether the use of this scale can lead to a reduction in the incidence of DVT in patients with gynaecological conditions.
CONCLUSION The modified Caprini risk assessment scale has high sensitivity in screening high‐risk patients with DVT and can serve as a timely warning of the occurrence of DVT. The score and risk level obtained from the scale can effectively guide medical staff towards the early prevention of DVT, thereby leading to a reduction in the incidence of DVT. We believe that the developed risk assessment scale may promote patient safety, reduce hospitalization time, and diminish medical costs. Study design: Xiaoying LI; data collection: Yun LONG, Fang HE; data analysis: Yun LONG, Mengzhen DONG; study supervision: Xiaoying LI, Shening ZHU; manuscript writing: Yun LONG; critical revisions for important intellectual content: Yun LONG, Xiaoying LI, Fang HE, Shening ZHU, Wenfeng ZHU. The authors declare that they have no competing interests. The study was approved by the ethics committee of Shenzhen Maternal & Child Healthcare Hospital and informed consent was obtained from all participants. |
Surgeon’s perceptions and preferences in the management of idiopathic macular hole | f2ba1e48-c738-4f4a-9b37-8ea50731001f | 11834932 | Surgical Procedures, Operative[mh] | A cross-sectional observational study was performed, consisting of a survey addressing the current practice patterns of vitreoretinal surgeons in India on macular hole surgery. The institutional review board approval was obtained. The study adhered to the tenets of the Declaration of Helsinki. A 12-item questionnaire was designed in the English language, keeping in mind the key aspects of macular hole surgery where inter-surgeon variation is more likely to be present. Most of the questions were close-ended with options to choose and only a few questions required descriptive answers. All 12 questions were optional to answer. The questionnaire was sent by personal correspondence (email) to 104 vitreoretinal specialists, actively practicing and performing iMH surgeries at various institutes across the country between October 2022 and November 2022, who then returned their responses on the same platform. A reminder was sent to those who had not responded by the end of December 2022. The responses were received till January 2023, and the data were exported to an Excel sheet. Statistical analysis was performed using the SPSS 23.0 software. The response to individual questions was analyzed for the whole cohort. Continuous variables were expressed as mean (with standard deviation) and median (with range), while categorical data were expressed as frequencies. Ninety-one retina specialists responded to the survey, a response rate of 87.5% (91/104). The median surgical load of the surgeons was 30 iMH surgeries per year (range: 5–150), and 52.9% (45/85) of the respondents performed more than 20 surgeries per year. Furthermore, 80% (73/91) of the participants were affiliated with tertiary care academic institutions, and the rest were stand-alone practitioners.
The majority of the surgeons (84.3%, 74/89) performed surgery even in elderly patients >75 years, one-eyed patients (78.7%, 70/89), and chronic cases of more than 1-year duration of iMH (71.6%, 63/88). However, 60% (53/89) of surgeons did not perform surgery in flat MH with complete PVD. Nearly all the surgeons (97.8%, 89/91) preferred brilliant blue G (BBG) dye for staining the ILM, while the remaining two respondents preferred trypan blue dye. In phakic eyes with visually insignificant cataracts, 34.1% (31/91) of surgeons would choose to perform combined cataract surgery with MH surgery, while the rest would prefer to perform cataract surgery on follow-up when required. The preferred instruments for initiation of ILM peel and creating a flap were ILM forceps (82.2%, 74/90), Finesse loop (11.1%, 10/90), membrane scrapper (4.4%, 4/90), or microvitreoretinal (MVR) blade (2.2%, 2/90). The preferred approach to peeling ILM was “pinch and peel” (80%, 72/90), followed by “scrape and peel” (15.6%, 14/90) and “incise and peel” (3.3%, 3/90). However, the preferred approach recommended for beginners or trainee surgeons was “scrape and peel” (50.6%, 40/79), followed by “pinch and peel” (40.5%, 32/79) and “incise and peel” (6.3%, 5/79). The preferred distance to initiate peel was 2 disc-diameter (DD) from the MH (52.9%, 46/87), followed by 1 DD (34.5%, 30/87). Furthermore, 12.6% of respondents (11/87) chose variable distance depending on the individual case profile. While the majority did not prefer any quadrant to initiate the peel, 43.7% (38/87) of respondents preferred a fixed area to initiate the peel. Of these 38 respondents, 18 preferred the inferotemporal quadrant to initiate the peel. Nearly all the respondents (97.1%, 68/70) initiated peel away from the blood vessels. The average number of attempts before achieving a satisfactory ILM fracture was 1–2 in 60.5% (52/86) and >2 in 39.5% (34/86) respondents. 
Most surgeons did not consider massaging the hole edges (80.7%, 67/82) or draining through the MH (75.9%, 63/83). The preferred vitreous substitute for tamponade was sulfur hexafluoride (SF6) gas (52.7%, 48/90), followed by perfluoropropane (C3F8) gas (41.8%, 38/90) and air (4.4%, 4/90). Postoperative prone position was recommended for 3–7 days by the majority (89%, 81/91), followed by 1 day (8.8%, 8/91) and 6 hours (2.2%, 2/91). Many respondents considered MH as large when the minimum linear dimension on OCT was >600 microns (42.2%, 38/90). In addition, 31.1% of surgeons (28/90) considered MH >800 microns as large, while only 18.9% (17/90) considered it large if the size was above 400 microns. Four surgeons considered >1000 microns as the definition for large MH. The preferred approaches in large MH were classic inverted ILM flap (48.9%, 44/90), multilayer flap (26.7%, 24/90), temporal flap (11.1%, 10/90), and traditional ILM peel (8.9%, 8/90). The other responses obtained were traditional ILM peel with massage of MH edge, temporal large rhexis, radial ILM peel, and multilayer flap with application of platelet-rich plasma (one respondent each). The approaches preferred in persistent iMH despite surgery with ILM peeling were free ILM flap (49.4%, 44/89), repeat fluid-gas exchange (12.4%, 11/89), autologous retinal graft (6.7%, 6/89), macular detachment and tamponade (3.4%, 3/89), amniotic membrane graft (AMG, 2.2%, 2/89), platelet-rich plasma application (1), and temporal large rhexis (1). Four surgeons considered AMG, and one considered lens capsule transplant as an additional but not preferred option for failed surgery. Furthermore, 24.7% of surgeons (22/89) did not consider further intervention in failed cases. Lastly, the three most important prognostic factors for visual outcomes after MH surgery (in order of importance) as perceived by survey participants were the duration of MH (46.2%, 42/91), preoperative vision (40.4%, 36/91), and MH size (37.5%, 33/91).
OCT indices, status of PVD, size of peel, type of tamponade, and compliance to positioning were not considered among the three most important prognostic factors by most surgeons. The treatment for idiopathic MH is vitrectomy, ILM peeling, and gas tamponade. However, there exist different views on individual aspects of the surgery, and this study assessed the current practices in India. The survey included responses from vitreoretinal surgeons with good surgical experience and workload. Nearly 85% of surgeons would operate even on elderly patients >75 years. MH surgery can lead to significant improvement in the quality of life in elderly patients. Therefore, old age may not be considered a criterion to recommend against MH surgery. In the current times, the improvements in microsurgical techniques have made vitrectomy and ILM peeling a safe procedure. This could be the reason that nearly 80% of the surgeons operated on one-eyed patients as well. Chronic iMH has been described variably as MH with a duration of more than 6 months, 1 year, or 2 years. The outcome studies in chronic iMH, which date back to the 1990s, have shown MH closure rates varying from 63% to 95%. While some authors have cautioned against surgery in chronic MH, most studies have reported an average of 2–3 Snellen line improvement. Chronic iMH may thus benefit from surgery, and some useful vision could be obtained, and this would be the reason why nearly three-fourths of the respondents in the survey would operate on chronic iMH cases. Almost all surgeons used BBG dye for staining the ILM. Compared to indocyanine green, newer dyes such as BBG, trypan blue, or a combination of these with polyethylene glycol or deuterium are much safer. While trypan blue also stains epiretinal membranes, BBG is specific for ILM and remains the first choice for ILM staining. Vitrectomy for MH increases the risk of development and progression of cataract. 
Only one-third of surgeons performed phaco-vitrectomy in eyes with insignificant cataract. While a combined phaco-vitrectomy will avoid a second surgery, the combined approach may have a higher risk of intraoperative complications such as corneal edema, poor visualization, and iatrogenic retinal injury and postoperative issues such as posterior synechiae formation, high intraocular pressure, unpredictable refractive outcomes, and MH non-closure. A recently performed systematic review and meta-analysis found no significant difference in the outcomes and complications between combined and sequential surgery groups. However, most studies included in the meta-analysis were retrospective or low-moderate quality trials. ILM peel can be initiated by forming a flap with the help of forceps (pinch technique), MVR blade, pick, diamond-dusted membrane scraper, or serrated nitinol loop (Finesse loop). The degree of dissociated optic nerve fiber layer appearance with scraping ILM is greater than after pinching it. Most of the surgeons in the survey used ILM forceps, and only a few preferred loop or scraper. However, many of them advocated the “scrape-and-peel” technique for beginners as experience and dexterity in hand control are required to avoid pinching the retinal tissue with forceps. Perhaps for similar safety reasons, nearly all surgeons preferred initiating the peel away from blood vessels. Nearly half of the surgeons initiated ILM peel 2 DD away from the MH, and one-fifth of surgeons preferred the inferotemporal quadrant. While there is no standard location to initiate the peel, the ILM is thickest and has maximum rigidity at nearly 1 mm from the foveal center. Therefore, it may be easier and also safer to initiate the peel away from the foveal center.
Massaging the MH edge with a scraper, Finesse loop, or soft-tip cannula and draining through the MH with a soft-tip extrusion cannula have been considered by some authors to mobilize the adjacent retina and assist in hole closure in chronic, large, or persistent MH. In 2007, Alpatov et al . described the technique of tapping the MH edges from periphery to center with a vitreal spatula; since then, more surgeons have explored its role either alone or with other adjuvant maneuvers such as macular detachment or drainage through the MH. However, retinal massage has been associated with retinal pigment epitheliopathy, and drainage through MH can also damage the underlying retinal pigment epithelium (RPE) and adjacent photoreceptors. Nearly 80% of the surgeons did not consider these techniques in their practice. The preferred gas for endotamponade was SF6, with C3F8 being a close second option. Gas tamponade helps by reducing the fluid flow across the MH and brings the edges of MH closer by the action of interfacial surface tension. Initially, the choice of gas preferred was C3F8 gas, which would maintain these functions for a longer period and perhaps improve the closure rates, but now there has been a gradual shift to the use of shorter-acting SF6 gas. The understanding behind hole closure has improved, and we now know that factors other than choice of tamponade also play a role, such as type and duration of postoperative posture, fill of gas, chronicity of MH, and retinal compliance. The current evidence is weak and does not support the use of a particular gas. Conventionally, a prone position is advised for a duration of 3 days–1 week after MH surgery, and this practice was also observed in this survey. There is no definitive evidence regarding the need and duration of prone or face-down positioning (FDP) after MH surgery. However, closure rates and visual gains are better in the subset of large MH (>400 microns) with FDP versus other positions. 
What seems more relevant is the fill of gas; if it is more than 50% fill for a desired duration, then adequate MH tamponade could be achieved with the propped-up position as well. Larger MH may need tamponade for a longer duration (5–7 days) and hence these cases benefit from FDP. Traditionally, MHs >400 microns in size have been considered as large, and additional maneuvers such as ILM flap and those aimed at improving retinal compliance have been advocated to improve closure rates in this category. In 2013, even the IVMT study group classified MH >400 microns as large. Recently, it has been noted that holes larger than this close to a certain extent without additional maneuvers, with comparable visual gains. In 2018, the Manchester Large Macular Hole Study reported MH >650 microns as large as these failed to close or had type 2 closure with only ILM peel. In 2021, the BEAVRS Macular Hole Outcome group suggested using 500 microns as the cutoff for large MH as beyond this the success rate starts to decline. Most recently, the CLOSE study group reported that ILM peel is enough for MH <530 microns, while MH between 535 and 800 microns need ILM flap techniques, and MH >800 microns need more invasive maneuvers such as macular hydrodissection or detachment, amniotic membrane transplant, or retinal autografts. Along similar lines, most of the surgeons in the survey reported considering MH >600 microns as large, followed by a close second option of >800 microns. In the category of large MH, the respondents preferred inverted or multilayered ILM flaps. The inverted ILM flap technique, originally described by Michalewska et al., provides a higher anatomical closure rate as well as visual gain in large MH than only ILM peel by providing a bridge of glial tissue that contracts and brings the MH edges together.
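The size-based escalation attributed above to the CLOSE study group can be written as a simple decision rule. This sketch is illustrative only and not clinical guidance; the 530/800-micron cutoffs are taken from the text (which leaves a small 530–535 micron gap that is collapsed here).

```python
def suggested_mh_approach(mld_um):
    """Suggest a surgical approach from the minimum linear dimension (microns),
    following the CLOSE-style thresholds quoted in the text (illustrative)."""
    if mld_um < 530:
        return "ILM peel"
    if mld_um <= 800:
        return "ILM flap technique (e.g. inverted or multilayer flap)"
    return "more invasive maneuvers (hydrodissection/detachment, AMG, retinal autograft)"
```

For example, a 400-micron hole maps to ILM peel alone, while a 900-micron hole maps to the more invasive options discussed below.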
Very large MH require more invasive techniques or a combination of these techniques aimed at increasing retinal compliance (arcade to arcade peel or arcuate temporal retinotomy), freeing the retina from underlying adhesion with RPE (macular hydrodissection or detachment), promoting further glial reaction (platelet-rich plasma or autologous serum application), replacing the dead space with retinal tissue (retinal autografts), and bridging the space with other tissues (amniotic membrane or lens capsule). For failed, persistent, or recalcitrant cases, all the surgical methods mentioned above for very large MH could be used. However, the more commonly used technique is the free ILM flap. In our study also, the surgeons preferred either free flaps or repeat fluid-air exchange with longer-acting gas tamponade. The visual prognosis in repeat MH surgery tends to remain suboptimal despite anatomical closure. This explains why close to one-fourth of the surgeons in the survey did not consider repeat intervention. Several factors have been identified as predictive of visual outcomes after successful MH surgery, including preoperative visual acuity, MH duration, age, lens status, MH size, OCT parameters, autofluorescence patterns, type of tamponade, and duration of FDP. Among these, the study participants perceived duration of MH, preoperative vision, and MH size as the most important predictors. In 2021, Fallico et al . reported symptom duration as the most important predictor, followed by preoperative visual acuity and MH size. These factors indirectly reflect the condition of the external limiting membrane, ellipsoid zone, and interdigitation zone, which are key structures integral to the visual function. Inherent to most surveys, the current study had limitations of coverage bias, sampling bias, non-response bias, and recall bias of the respondents. The survey does not reflect the practice of all vitreoretinal specialists in the country. 
Being cross-sectional, the data apply only to the current period, as MH surgery practices are continuously evolving. The study surveyed only the surgical approach and not the anatomical and functional outcomes of MH surgery. This survey revealed the current practice patterns of experienced vitreoretinal surgeons performing MH surgery in India and found them in accordance with the available evidence. There is a need to revisit and reassess the existing classification system for MH size and patient selection criteria, standardize the ILM peel techniques, particularly for beginners, and determine the most beneficial additional surgical maneuvers as per MH size and configuration. Abbreviations iMH: idiopathic macular hole; ILM: internal limiting membrane; PVD: posterior vitreous detachment; IVMT: international vitreomacular traction; BBG: brilliant blue G; SF6: sulfur hexafluoride; C3F8: perfluoropropane; AMG: amniotic membrane graft; FDP: face-down positioning; RPE: retinal pigment epithelium; ELM: external limiting membrane; EZ: ellipsoid zone; IZ: interdigitation zone. Conflicts of interest: There are no conflicts of interest.
Co-inoculation with | 84eedb51-79be-4907-9a76-433140bdc819 | 11796456 | Microbiology[mh] | Bradyrhizobium spp. establish a symbiotic relationship with soybean plants, a phenomenon exploited by agricultural practices due to increased nitrogen fixation and grain yield, which reduces the reliance on inorganic nitrogen fertilizers (Hungria et al. ). This symbiosis, observed in many leguminous species, provides an increase in nitrogen availability in various agroecosystems (Egamberdieva et al. ). Consequently, numerous studies using different Bradyrhizobium strains are conducted worldwide, screening for new beneficial rhizobia applicable to soybean crops in agriculture (Ulzen et al. 2016, Temesgen and Assefa ). Growing evidence indicates that other beneficial soil bacteria can positively affect rhizobia performance (Korir et al. ). Soybean inoculation with different rhizobacteria strains, mainly species from the genera Azospirillum, Bacillus , and Pseudomonas , in consortium with rhizobia, has been reported to promote plant growth and enhance crop yield (Zeffa et al. ). Additionally, this approach increases seed germination, nodulation, and nitrogen fixation (Aung et al. , Rechiatu et al. , Ulzen et al. ). Some studies have assessed the impact of combining inocula on plant growth, such as the co-inoculation of Bradyrhizobium japonicum and Pseudomonas striata . This combination resulted in a significant improvement in soybean growth and grain yield compared to the sole application of B. japonicum (Wasule et al. ). Additionally, the co-inoculation of Bacillus spp. with B. japonicum in soybean led to enhanced nodulation and nitrogen fixation, attributed to the formation of larger nodules (Sibponkrung et al. ).
Plant growth-promoting rhizobacteria (PGPR) can directly facilitate plant growth through various mechanisms, including the production of siderophores, synthesis of phytohormones such as auxins, cytokinins, and gibberellins, and solubilization of nutrient minerals (Masciarelli et al. ). Since different strains show different effects on plant physiology, and they also vary in their symbiotic effectiveness with different cultivars, there is a need to unravel the mechanisms involved in the PGPR interaction with plants (Temesgen and Assefa ). Furthermore, recent studies involving the co-inoculation of two or more PGPRs have shown improved crop morphology and physiological structure, driven by the combined action of different PGPRs, such as Pseudomonas putida KT2440 and Sphingomonas sp. OF178, Azospirillum brasilense Sp7, and Acinetobacter sp. EMM02 (Molina-Romero et al. ), Pseudomonas stutzeri (E25) and Stenotrophomonas maltophilia (Rojas-Solís et al. ), and Azospirillum brasilense and Bradyrhizobium spp. (Barbosa et al. ). Bacillus thuringiensis (Bt) RZ2MS9, a PGPR isolated from the guarana rhizosphere, has demonstrated significant effects on promoting soybean growth (Batista et al. ). Notably, the inoculation of Bt RZ2MS9 resulted in a substantial increase in the dry weight of both soybean shoots and roots compared to their noninoculated counterparts. Furthermore, Bt RZ2MS9 exhibits several plant growth-promoting traits, including the production of indole acetic acid (IAA), biological nitrogen fixation, and phosphate solubilization (Batista et al. ). One of the major Bt RZ2MS9 traits involved in plant growth promotion is IAA production (Batista et al. , Figueredo et al. ). Although there is already substantial evidence of the benefits of using inoculants to promote the health and growth of plants, there is a growing interest in understanding the interaction of the inoculant with the soil microbiome.
The soil microbiome comprises a complex and rich diversity of species, and the interactions among them play an essential role in plant health and productivity. As a result, there is increasing interest in research on beneficial PGPR strains and their diversity in soil for successful inoculation techniques (Philippot et al. , Jiménez et al. ). Trabelsi and Mhamdi highlighted the importance of evaluating the impacts of microbial inoculants on soil microbial communities. They selected 17 significant studies on the theme and summarized the impacts of inoculants as not consistently changing the number and composition of the native taxonomic groups. They also highlighted the need to investigate the complexity of the metabolic potential of soil microbial communities. The studies available at the time used techniques with low power to discriminate soil microbial diversity, such as denaturing gradient gel electrophoresis (DGGE), terminal restriction fragment length polymorphism (T-RFLP), and quantitative PCR (qPCR). More recently, Mawarda et al. revisited the theme and, after reviewing 108 studies, observed that 86% of them showed that inoculants modify soil microbial communities, highlighting the need for functional studies using multi-omics exploration. In their review, most of the studies used 16S rRNA sequencing to investigate the bacterial soil community. The relevance of studying the genetic potential of soil microbial communities after inoculation using whole-metagenome shotgun sequencing has also been flagged as a challenge for future studies (Wang et al. ). Thus, this study aimed to evaluate the impact of Bt RZ2MS9 and its co-inoculation with rhizobia on soybean growth, as well as on the diversity, community structure, and functional diversity and potential of soil natural communities under field conditions.
Biological material The PGPR Bt RZ2MS9 was first isolated from the rhizosphere of the Amazonian guarana plant ( Paullinia cupana var. sorbilis) (Batista et al. ). It is stored in 20% glycerol at −80°C at the Genetics of Microorganisms Laboratory, ESALQ/USP, Piracicaba, SP, Brazil. Bt RZ2MS9 cultures were routinely obtained on Luria–Bertani (LB) medium (tryptone 10 g·l −1 , yeast extract 5 g·l −1 , and NaCl 10 g·l −1 ) at 28°C with 150 rpm agitation. We applied the commercial peat bioinoculant Masterfix® Soja for the co-inoculation study, which contains the rhizobia B. japonicum and Bradyrhizobium elkanii (SEMIA 5079 and SEMIA 5019, respectively). Seed treatment was performed according to the instructions provided by the manufacturer. Finally, the field study was conducted with the commercial soybean cultivar Potencia BMX (Brasmax Genetica, Brazil), which is responsive to inoculants for biological nitrogen fixation (Braccini et al. ). Experimental area characterization The field experiment was conducted from December 2018 to April 2019 in an area of 1 ha of the Anhumas São Paulo University Research Station, in Piracicaba, SP (latitude 22° 50′ 26″ south, longitude 48° 1′ 20″ west), Brazil. The experiment was installed in an area previously planted with soybean (summer). The chemical and physical characterization of the soil in which soybean was cultivated is presented in . Bio-inoculum preparation and seed treatment The Bt RZ2MS9 inoculum was prepared and transported on the same day to the experimental area, where seed bacterization was performed before seeding. The inoculum consisted of a bacterial suspension in saline solution (∼1×10 8 CFU·ml −1 ), which was prepared by growing the bacterium in LB medium at 28°C with 150 rpm agitation, measuring the optical density of the culture, and adjusting the concentration. The inoculum dosage applied was 8 ml of the bio-inoculant for each 1 kg of seeds, which were dried in the shade before mechanical planting.
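The concentration adjustment and the 8 ml·kg⁻¹ dosage described above lend themselves to a small worked example. The sketch below is illustrative only: the OD600-to-CFU conversion factor is a hypothetical calibration, not a value reported for Bt RZ2MS9.

```python
# Planning the seed inoculation described above (illustrative sketch).
# ASSUMPTION: an OD600 of 1.0 corresponds to ~8e8 CFU/ml for this culture;
# this calibration factor is hypothetical, not taken from the study.

TARGET_CFU_PER_ML = 1e8       # target suspension concentration (~1 x 10^8 CFU/ml)
DOSE_ML_PER_KG_SEED = 8.0     # 8 ml of bio-inoculant per 1 kg of seeds

def dilution_factor(od600, cfu_per_od=8e8):
    """Fold dilution needed to bring a culture down to the target CFU/ml."""
    current_cfu = od600 * cfu_per_od
    if current_cfu < TARGET_CFU_PER_ML:
        raise ValueError("culture is below the target concentration")
    return current_cfu / TARGET_CFU_PER_ML

def inoculant_volume_ml(seed_mass_kg):
    """Volume of adjusted suspension needed for a given seed lot."""
    return DOSE_ML_PER_KG_SEED * seed_mass_kg

# A culture at OD600 = 2.5 used to treat a 50 kg seed lot:
print(dilution_factor(2.5))      # 20.0 (dilute 20-fold)
print(inoculant_volume_ml(50))   # 400.0 ml
```

In practice the OD-to-CFU relation must be calibrated per strain and medium by plate counting.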
For the negative control, the same procedure was applied, but using pure LB medium. Inoculation with the commercial rhizobia product Masterfix® Soja was performed according to the manufacturer's instructions, diluting the peat product in saline solution to a final concentration of 1×10 8 CFU·ml −1 and applying the inoculant directly to the seeds. The material was also dried in the shade before seeding, which occurred 2 h after seed treatment for all inoculations tested. Field experiment The experiment was conducted in a strip design to restrict the areas of inoculant application in the field and avoid spread among treatments, which could occur with a smaller plot design. Replications were performed within each strip, with 20 sampling points marked in the strips, considering a 5-m border on both sides of the treatments. The treatments were the control (no bacterial inoculation), Bt (Bt RZ2MS9 inoculation), Bt_rhizobia (Bt RZ2MS9 + Masterfix Soja co-inoculation), and rhizobia (Masterfix Soja inoculation) . Mechanical seeding occurred on 28 November 2018, with soybean seeds planted at a depth of 3 cm along the experimental strips (40 rows wide, spaced by 45 cm, and 100 m in length). Prior to seeding, the fertilizer Nutrisafra® 04-20-20 was applied. All treatments received the same crop management, with applications of the fungicide Approach®Prima (300 ml·ha −1 ) and the insecticide Belt® (70 ml·ha −1 ). Effects of bacterial inoculation on soybean growth promotion and productivity At the beginning of the flowering stage [R1, 45 days after sowing (DAS)], we measured plant height. Five plants were sampled per point, totaling 100 plants per treatment, measured from the base of the plant (at the ground) up to the apex of the main stem using a metric table, according to Rocha et al. . Crop lodging was assessed for each sampling point based on the average erectness of the main stem of plants at R8 (full maturity), according to Antwi-Boasiako .
The rating system applies a scale from 1 to 5, with 1 = all plants erect, 2 = 25% of plants lodged, 3 = 50% of plants lodged, 4 = 75% of plants lodged, and 5 = all plants lodged. The soybean harvest was carried out on 3 April 2019. The harvesting strips had previously been marked along the 20 sampling points, each consisting of two 5-m rows of plants, which were evaluated for total grain yield and 100-seed weight. Five plants from each sampling point were kept for measurements of dry mass, stem diameter, pod number, and seeds per pod for production estimates. Soybean seed oil and protein content The percentage of oil and protein content in soybean seeds was measured through near-infrared (NIR) spectroscopy (Jiang ). This analysis was performed at the Laboratory of Applied Biotechnology for Plant Breeding at Universidade Estadual Paulista, Jaboticabal, SP, Brazil. Data were gathered from whole soybean seeds of each treatment, divided into 20 biological replicates and 3 technical replicates, on Bruker® FT-NIR TANGO spectroscopy equipment. Soil sampling, DNA extraction, library construction, and data processing Soil samples for metagenomic analysis were collected at 20 sampling points within each treatment strip at each time point considered [Before, before sowing; CropR1, during crop development at the R1 stage (45 DAS); and After, 21 days after total harvesting of the soybean plants (147 DAS)], respecting a 5-m border on each side of the strips. At each sampling point, 0–20 cm of soil was collected with the help of a soil probe. The material was immediately transported to the Laboratory of Genetics of Microorganisms at ESALQ/USP, Piracicaba, SP, Brazil, and stored at −80°C until DNA extraction. The soil collected was separated for DNA extraction as follows: for the Before time point, 20 soil samples from the field area were grouped into one composite sample of 5 g and then into 4 composite samples of 250 mg for DNA extraction.
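The yield components measured above (pod number, seeds per pod, 100-seed weight) combine into per-plant and per-hectare estimates. A minimal sketch with invented numbers, assuming the harvested plot area equals two 5-m rows times the 0.45-m row spacing (that area formula is our assumption, not stated in the text):

```python
# Illustrative yield estimate from the components measured above.
# All numeric inputs are invented examples, not the study's data.

ROW_SPACING_M = 0.45                     # 45 cm between rows
PLOT_AREA_M2 = 2 * 5.0 * ROW_SPACING_M   # two 5-m rows (assumed plot area: 4.5 m^2)

def plant_yield_g(pods, seeds_per_pod, hundred_seed_weight_g):
    """Grain mass per plant from pod number, seeds/pod, and 100-seed weight."""
    return pods * seeds_per_pod * hundred_seed_weight_g / 100.0

def productivity_kg_ha(plot_grain_g):
    """Scale one plot's grain mass (g) to kg per hectare."""
    return plot_grain_g / 1000.0 * 10_000.0 / PLOT_AREA_M2

print(plant_yield_g(40, 2.5, 15.0))   # 15.0 g per plant
print(productivity_kg_ha(1800.0))     # ~4000 kg/ha from a 1.8 kg plot
```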
DNA extractions of the samples from the CropR1 and After time points were performed for each treatment in 4 composite samples, each mixed from the 20 soil sampling points and containing 5 g of soil. Total DNA extraction was performed using the DNeasy PowerSoil® Kit (Qiagen). DNA quality was assessed using agarose gel electrophoresis, and quantification was performed using a NanoDrop One and a Qubit 4.0 fluorometer with the DNA High Sensitivity kit (ThermoFisher). Fragment sizes were assessed on the 2100 Bioanalyzer (Agilent Technologies) with the DNA High Sensitivity kit (Agilent Technologies). Libraries were prepared with the Nextera DNA Flex kit (Illumina). Samples were then sequenced on an Illumina NextSeq 550 platform with paired-end reads (2×151 bp) (Illumina). FastQC and MultiQC were used to assess the quality of raw reads and to compile an integrated report, respectively. Sequence trimming was performed using Trimmomatic v0.33 with a minimum quality threshold of Phred 20 (Bolger et al. ). Post-trimming, the taxonomic classification of the sequences was carried out using Kraken2 v2.1.3 (Wood and Salzberg ), leveraging the RefSeq NCBI Standard database provided at https://benlangmead.github.io/aws-indexes/k2 , dated 5 June 2023. The paired function was employed in Kraken2 for this classification. Functional annotation of the filtered metagenomic sequences was performed using bash scripts on Linux, with SUPERFOCUS (Silva et al. ) against the SEED Subsystem database (Overbeek et al. ). To identify genes associated with PGPR, we employed the PGPg_finder pipeline (Pellegrinetti et al. ), referencing the PLaBAse database (Patz et al. ), specifically utilizing the mgPGPT-db in FASTA format. In this process, metagenomic sequences were first paired using the PEAR software (Zhang et al. ), and then converted into protein sequences with Prodigal (Hyatt et al. ). These protein sequences were subsequently aligned using the DIAMOND program (Buchfink et al.
) and processed in R to generate an abundance table. Data analysis Field experiment data were statistically evaluated with ANOVA, followed by Tukey tests to compare the means obtained for each treatment. All analyses were performed in R software (R Core Team ), and the significance level adopted in all tests was .05. Soil microbiome diversity, considering taxonomy and functions, was analyzed using the microeco package v.1.1.0 in R software (v.4.2.1) (Liu et al. ). For alpha diversity, we evaluated observed genus richness, Shannon's diversity index, and Simpson's diversity index, all at the genus level. These indices were statistically compared using ANOVA followed by the Tukey honestly significant difference (HSD) test. All P values were set with a 95% confidence interval, and differences were considered significant when P < .05. For beta diversity, we employed NMDS (nonmetric multidimensional scaling) and PCoA (principal coordinates analysis) based on the Bray–Curtis distance matrix of the soil samples, with statistical validation through permutational multivariate analysis of variance (PERMANOVA). PCoA was used when NMDS stress was insufficient. We also generated taxonomic summary bar charts to display the relative abundance at the phylum level, emphasizing the top 12 taxa; any taxa not within this top 12 were grouped under "others." Differential genus abundance was determined using paired comparisons with Welch's t -test in STAMP v2.1.3, setting an alpha level of 0.05 and focusing on taxonomy at the genus level. We assessed differential taxonomy across the different phases of the experiment, separately examining the R1 phase and the experiment's concluding phase. Concerning the PGPR genes, we assessed the differential abundance of genes using the same methodology as for taxonomic differential abundance. We identified genes specific to each phase (R1 and subsequent phases) for each inoculation.
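The alpha-diversity indices and the Bray–Curtis distance named above are simple functions of genus count vectors. The study computed them with the microeco R package; a stdlib-only sketch on invented counts:

```python
# Stdlib sketches of the diversity measures used in the analysis:
# Shannon and Simpson indices on genus counts, and Bray-Curtis dissimilarity
# between two samples. Counts below are invented for illustration.
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over nonzero genera."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def simpson(counts):
    """Gini-Simpson index 1 - sum(p_i^2)."""
    total = sum(counts)
    return 1.0 - sum((c / total) ** 2 for c in counts)

def bray_curtis(x, y):
    """Bray-Curtis dissimilarity between two abundance vectors."""
    num = sum(abs(a - b) for a, b in zip(x, y))
    den = sum(a + b for a, b in zip(x, y))
    return num / den

even = [25, 25, 25, 25]
skewed = [85, 5, 5, 5]
print(round(shannon(even), 3))    # 1.386 (= ln 4, maximal for 4 genera)
print(simpson(even))              # 0.75
print(bray_curtis(even, skewed))  # 0.6
```

A PCoA or NMDS ordination then operates on the pairwise Bray–Curtis matrix built from such vectors.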
The differential PGPR gene abundance was evaluated in STAMP similarly as described earlier for differential genus abundance. Co-inoculation of soybean with Bt RZ2MS9 and rhizobia The control exhibited significantly lower plant height compared to the other treatments. Bt showed the highest plant height values, followed by Bt_rhizobia and rhizobia (Fig. ). Pod numbers were also higher in all inoculated treatments compared with the control (Fig. ). The data did not show significant variation in stem diameter (Fig. ), shoot dry mass (Fig. ), or plant lodging (Fig. ). Regarding total grain yield, we observed a slightly higher grain yield for Bt compared to both the control and Bt_rhizobia. Interestingly, rhizobia had a lower grain yield (Fig. ) but a higher average 100-seed weight (Fig. ), which indicates that it produced fewer but bigger grains. Average total grain yield was used to estimate productivity in kg·ha −1 . Although Bt did not differ statistically from the control or Bt_rhizobia in productivity, Bt promoted an increase of ∼10% in productivity (Fig. ). No effect of the inoculations was observed on the oil and protein content of the seeds, with all treatments presenting very similar results . Bt_rhizobia co-inoculation showed both the positive effects of rhizobia on pod number and a higher 100-seed weight, but without the lower productivity estimates or lower total grain yield, showing the potential of this co-inoculation.
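The treatment means above were compared with one-way ANOVA followed by Tukey tests (in R). As a minimal stdlib illustration of the F statistic underlying that comparison, with invented plant-height values:

```python
# One-way ANOVA F statistic, stdlib only (the study used R's anova/Tukey HSD).
# The height values below are invented for illustration.

def one_way_f(groups):
    """F = between-group mean square / within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

control_height = [59.0, 60.0, 61.0]   # cm, invented toy data
bt_height = [69.0, 70.0, 71.0]
print(one_way_f([control_height, bt_height]))   # 150.0
```

The F value is then compared against the F distribution with (k−1, n−k) degrees of freedom; a Tukey HSD step afterwards controls the family-wise error of the pairwise mean comparisons.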
Soil microbiome diversity and structure analysis Independent of the treatment, the alpha and beta taxonomic diversity measurements did not show significant variation, suggesting that the diversity of the soil natural communities of Bacteria and Archaea was resistant to changes due to the inoculation of Bt or rhizobia within the time frame evaluated (Figs and and and ). Functional diversity followed a comparable pattern, with no significant differences in functional richness (Fig. ), and significant changes only when comparing the functional Shannon diversity, based on SEED feature annotation, of Bt at 45 DAS (CropR1) and Bt at 147 DAS (After) (Fig. ). This result indicates a functional effect of Bt inoculation on the community within the first 45 days, which was lost after plant removal, by 147 DAS. Additionally, PCoA showed no significant differences in functional structure among treatments in either the CropR1 or the postharvest phase, indicating that the functional changes were restricted to a few functions (Fig. and ). The effect of Bt inoculation on soil functional Shannon diversity between CropR1 and After was evaluated using the complete functional annotation of the data. The function "quorum sensing and biofilm formation" was significantly increased in CropR1 at 45 DAS but was reduced to the levels of the control natural community by the After time point, at 147 DAS (Fig. ). In CropR1, this function was also significantly increased in the soil community when Bt inoculation was compared to Bt + rhizobia, rhizobia alone, and the control natural community (Fig. ). More specifically, within this functional class, the annotation for "N-acyl_homoserine_lactone_hydrolase" showed an increase. The analysis of the relative abundance of bacterial and archaeal taxa at the phylum level showed high homogeneity among treatments, even when considering the time variable (Before, CropR1, and After).
Taxonomic classification further revealed 59 phyla, 117 classes, 250 orders, 560 families, and 2029 genera. The dominant phyla were Proteobacteria (syn. Pseudomonadota) (46.56%), Actinobacteria (syn. Actinomycetota) (45.64%), Planctomycetes (syn. Planctomycetota) (1.65%), Firmicutes (syn. Bacillota) (1.36%), Bacteroidota (1.01%), Euryarchaeota (0.91%), and Acidobacteria (0.88%) . Differential taxonomy and function abundance Considering the differential abundance of taxa, significant alterations in the soil microbial community in response to the inoculants were found, especially in Bt_rhizobia at CropR1. This treatment was characterized by the presence of distinct genera such as Agromyces, Capillimicrobium, Luteitalea, and Anaeromyxobacter, each known for beneficial plant interactions. This diversity suggests a synergistic effect of Bt and rhizobia, potentially enhancing soybean growth and health. In comparison, Bt exhibited a different microbial profile, with an increased presence of genera such as Gemmata and Frigoriglobus. Meanwhile, rhizobia resulted in an increase in beneficial bacteria such as Streptomyces, Sorangium, and Anaeromyxobacter, some of whose species are known for their roles in promoting plant growth and soil health (Fig. ). In the After samples, the microbial community in Bt_rhizobia exhibited a diverse range of differential genera, including Capillimicrobium, Gottfriedia, Arthrobacter, Nitrospira, and Nordella. This diversity contrasts with the control, which maintained a more limited range of genera such as Mycobacterium, Nocardia, Gemmatirosa, and Gemmatimonas. The presence of Nitrospira, a known nitrifier, along with Arthrobacter and other beneficial microbes in the Bt_rhizobia group suggests enhanced nitrogen cycling and other plant growth-promoting activities in the soil, which are crucial for soybean health and yield (Fig. ).
Regarding the plant growth-promoting functional potential, we observed that at CropR1, control soil samples were characterized by genes associated with stress response (hyfR, pqqE), nitrogen metabolism (ureE), and various central metabolic pathways (glnE, yifK), reflecting the native functional capabilities of the soil microbiota. Upon Bt inoculation, a distinct enrichment of genes related to nutrient transport (pstC), modulation of the nitrogen cycle (norQ), and carbon processing (mmdA, fwdA) was observed. The Bt_rhizobia treatment further diversified the functional gene profile, with an abundance of genes implicated in phosphate mobilization (phoA, phoD), nitrogen assimilation (glnA), and carbohydrate metabolism (dgoD), alongside genes linked to environmental stress resilience (phy). Similarly, rhizobia promoted genes beneficial for phosphate mobilization and nitrogen fixation (nifA, phoA, phoD), as well as those involved in sulfur assimilation (cysA) and urea hydrolysis (ureC) (Fig. ). Following the harvest, a persistent alteration in the soil metagenome was evident. Control samples continued to show an abundance of genes central to metabolic integrity and nutrient cycling. In contrast, Bt-inoculated soils exhibited genes that could potentially influence postharvest nitrogen cycling (sfnG), microbial community structure through biofilm regulation (exoR), and phosphate transport (pstP). Notably, the Bt_rhizobia treatment demonstrated a wide array of functional genes, including those related to complex organic compound degradation (ssuD), response regulation (phoP), and atrazine degradation (atzF), suggesting a long-term effect on the soil's capacity for self-renewal and environmental detoxification. The rhizobia treatment maintained an abundance of genes that may enhance nitrogen utilization (urtA) and provide environmental stress resilience (K04618), possibly aiding in soil restoration for future crop cycles.
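The differential-abundance comparisons behind these genus and gene profiles rely on Welch's unequal-variance t-test (run in STAMP). A stdlib sketch of the statistic, applied to invented relative abundances of a single genus:

```python
# Welch's t statistic as used (via STAMP) for differential abundance.
# The relative-abundance values below are invented for illustration.
import math

def welch_t(a, b):
    """Welch's unequal-variance t statistic between two samples."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

bt_rhizobia = [0.80, 0.90, 1.00, 1.10]   # % relative abundance of one genus
control = [0.40, 0.50, 0.45, 0.45]
print(round(welch_t(bt_rhizobia, control), 2))   # 7.39
```

A P value then follows from the t distribution with Welch–Satterthwaite degrees of freedom; STAMP reports that together with effect sizes.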
The control exhibited significantly lower plant height compared to other treatments. Bt showed the highest plant height values followed by Bt_rhizobia and rhizobia (Fig. ). The pod numbers were also higher in all inoculated treatments comparing with the control (Fig. ). The data did not show a significant variation in stem diameter (Fig. ), shoot dry mass (Fig. ), and plant lodging (Fig. ). Regarding the total grain yield, we observed a slightly higher grain yield for the Bt compared to both the control and the Bt_rhizobia. Interestingly, rhizobia had a lower grain yield (Fig. ), but a higher average weight of 100 seeds (Fig. ), which indicates that it produces less but bigger grains. Average total grain yield was used to estimate productivity in kg·ha −1 . Although, Bt did not differ statistically from the control or Bt_rhizobia in productivity, Bt promoted an increased ∼10% of productivity (Fig. ). No effect of inoculations was observed on oil and protein content from the seeds, with all treatments presenting very similar results . Bt_rhizobia co-inoculation had both the positive effects of rhizobia on pod number and higher 100-seed weight, but not lower productivity estimates nor lower total grain yield, showing the potential of this co-inoculation. Independent of the treatment, the alpha and beta taxonomic diversity measurements did not show significant variations, suggesting that the diversity of soil natural communities of Bacteria and Archaea was resistant to changes due to inoculation of Bt or rhizobia, in the time frame evaluated (Figs and and and ). Similarly, functional diversity followed a comparable pattern, with no significant differences in functional richness (Fig. ), with significant changes only when comparing functional Shannon diversity based on SEED features annotation of Bt 45 DAS (CropR1) and Bt 147 DAS (After) (Fig. ). 
This result indicates a functional effect on the community of inoculating Bt, on the first 45 days of inoculation, lost after plant removal and 147 DAS. Additionally, PCoA showed no significant differences in the functional structure among treatments in both the CropR1 and postharvest phases, indicating that the functional changes were restricted to a few functions (Fig. and ). The effect of Bt inoculation on soil functional Shannon diversity between CropR1 and After was evaluated using the complete functional annotation of the data. The function “quorum sensing and biofilm formation” was one of the functions significantly increased in CropR1 at 45 DAS but was reduced to the levels of the control natural community on After time, by 147 DAS (Fig. ). In CropR1, this function was also significantly increased in soil community when Bt inoculation was compared to Bt + rhizobia, rhizobia alone, and the control natural community (Fig. ). More specifically, within this functional class, the annotation for “N-acyl_homoserine_lactone_hydrolase” showed an increase. The analysis of the relative abundance of the bacterial and archaeal taxa at the phylum level showed high homogeneity among treatments, even when considering the time variable (Before, CropR1, and After). Taxonomic classification further revealed 59 phyla, 117 classes, 250 orders, 560 families, and 2029 genera. The dominant phyla were Proteobacteria (syn. Pseudomonadota) (46.56%), Actinobacteria (syn. Actinomycetota) (45.64%), Planctomycetes (Syn. Planctomycetota) (1.65%), Firmicutes (syn. Bacillota) (1.36%), Bacteroidota (1.01%), Euryarchaeota (0.91%), and Acidobacteria (0.88%) . Considering the differential abundance of taxa, significant alterations in the soil microbial community in response to the inoculants were found, especially in Bt_rhizobia at CropR1. 
It was characterized by the presence of distinct genera such as Agromyces, Capillimicrobium, Luteitalea, and Anaeromyxobacter, each known for beneficial plant interactions. This diversity suggests a synergistic effect of Bt and rhizobia, potentially enhancing soybean growth and health. Bt alone exhibited a different microbial profile, with an increased presence of genera such as Gemmata and Frigoriglobus. Meanwhile, rhizobia resulted in an increase in beneficial bacteria such as Streptomyces, Sorangium, and Anaeromyxobacter, some species of which are known for their roles in promoting plant growth and soil health (Fig. ). In the After samples, the microbial community in Bt_rhizobia exhibited a diverse range of differential genera, including Capillimicrobium, Gottfriedia, Arthrobacter, Nitrospira, and Nordella. This diversity contrasts with the control, which maintained a more limited range of genera such as Mycobacterium, Nocardia, Gemmatirosa, and Gemmatimonas. The presence of Nitrospira, a known nitrifier, along with Arthrobacter and other beneficial microbes in the Bt_rhizobia group suggests enhanced nitrogen cycling and other plant growth-promoting activities in the soil, which are crucial for soybean health and yield (Fig. ). Regarding plant growth-promoting functional potential, at CropR1 the control soil samples were characterized by genes associated with stress response (hyfR, pqqE), nitrogen metabolism (ureE), and various central metabolic pathways (glnE, yifK), reflecting the native functional capabilities of the soil microbiota. Upon Bt inoculation, a distinct enrichment of genes related to nutrient transport (pstC), modulation of the nitrogen cycle (norQ), and carbon processing (mmdA, fwdA) was observed.
The Bt_rhizobia treatment further diversified the functional gene profile, with an abundance of genes implicated in phosphate mobilization (phoA, phoD), nitrogen assimilation (glnA), and carbohydrate metabolism (dgoD), alongside genes linked to environmental stress resilience (phy). Similarly, rhizobia promoted genes beneficial for phosphate mobilization and nitrogen fixation (nifA, phoA, phoD), as well as those involved in sulfur assimilation (cysA) and urea hydrolysis (ureC) (Fig. ). Following the harvest, a persistent alteration in the soil metagenome was evident. Control samples continued to show an abundance of genes central to metabolic integrity and nutrient cycling. In contrast, Bt-inoculated soils exhibited genes that could potentially influence postharvest nitrogen cycling (sfnG), microbial community structure through biofilm regulation (exoR), and phosphate transport (pstP). Notably, the Bt_rhizobia treatment demonstrated a wide array of functional genes, including those related to complex organic compound degradation (ssuD), response regulation (phoP), and atrazine degradation (atzF), suggesting a long-term effect on the soil's capacity for self-renewal and environmental detoxification. The rhizobia treatment maintained an abundance of genes that may enhance nitrogen utilization (urtA) and provide environmental stress resilience (K04618), possibly aiding in soil restoration for future crop cycles. Inoculants and plant growth promotion Soybean [Glycine max (L.) Merr.] stands out as one of the globally predominant crops employing inoculants, primarily relying on a variety of bacteria from the genus Bradyrhizobium (Santos et al. ). Moreover, the co-inoculation of rhizobia in consortium with other PGPR has significantly improved soybean growth and grain yield compared with the sole application of rhizobia (Wasule et al. ).
The bacterial strain studied here, Bt RZ2MS9, has already demonstrated positive results when inoculated in soybean and maize, as well as the ability to colonize maize endophytically (Batista et al. , Almeida et al. ). It is possible that this bacterium is also endophytic in soybean, which could explain its role in promoting plant height growth. Ferrarezi et al. previously showed the effect of this strain on the maize rhizobiome under field conditions. Considering the potential of this strain as a bioinoculant, this study presents the first evaluation of the co-inoculation of rhizobia and Bt RZ2MS9 and its effects on soybean, as well as on soil bacterial diversity and functional potential, under field conditions. In PGPR, the mechanisms underlying plant growth promotion involve biological processes such as IAA production, phosphate solubilization, and urease activity, which directly affect nutrient and water uptake by the plant (Khan et al. ). A previous study with Bt RZ2MS9 demonstrated its capability to produce IAA in the presence of l-tryptophan (Batista et al. , Figueredo et al. ), possibly attributed to the strain's ability to use l-tryptophan as a physiological precursor (Spaepen et al. ). Several strains of B. thuringiensis have been used to promote plant growth, and the findings of this study align with previous reports (Vidal-Quist et al. , Tagele et al. , Viljoen et al. , Jo et al. ). In previous studies, soybean inoculation with Bt RZ2MS9 resulted in increased plant growth (Batista et al. ). The average shoot length of treatments inoculated and co-inoculated with this strain was greater than that of the control, but there was no significant effect on shoot dry mass, stem diameter, or productivity. Even though rhizobacteria of the genus Bacillus commonly interact positively with plants, different species and strains may have varying effects on other aspects of plant growth.
PGPR can produce phytohormones, improve drought resistance, and suppress pathogens, but some of these attributes may not be directly correlated with significant increases in grain yield under field conditions (Elkoca et al. ; Tsigie et al. ). Experiments involving different plant species and varying environmental conditions may reveal different plant growth-promoting features and productivity results. Similarly, Bai et al. evaluated Bt A5-BRSC inoculation on the development of okra. Their results showed significant increases in seed germination, shoot height, root length, leaf diameter, vigor index, fruit weight, seed weight, and total fresh and dry weight of inoculated plants in comparison with the control. Hungria et al. observed an increase of 420 kg·ha −1 (16.1%) in the production of soybean co-inoculated with B. japonicum and A. brasilense compared with a control treatment inoculated only with B. japonicum . However, Zuffo et al. reported no significant differences in productivity between soybean co-inoculated with B. japonicum and A. brasilense and a control inoculated only with the former bacterium. A study with Bacillus subtilis co-inoculated with B. japonicum in soybean by Atieno et al. showed increased soybean nodulation and biomass traits. Thus, the impact of co-inoculation on soybean grain yield remains unclear (Zeffa et al. ). Interestingly, we observed an increase in pod number in all inoculated treatments (Bt, Bt_rhizobia, and rhizobia alone) compared with the control. Bioinoculants may influence soil nutrient availability to the plant, thereby impacting grain production, and such differences depend on the type of formulation used for crop inoculation (Maitra et al. ). The increase in pod number was accompanied by a 100-seed weight increase in the treatments that received rhizobia (alone or in co-inoculation). This shows that rhizobia stimulate increases in both pod number and grain weight, as also observed by Azfal et al. .
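As a quick arithmetic check on the Hungria et al. figures cited above (an illustration of the reported numbers, not data from this study), a gain of 420 kg·ha⁻¹ corresponding to 16.1% implies a reference yield of roughly 2600 kg·ha⁻¹:

```python
gain_kg_ha = 420.0     # reported absolute yield gain (Hungria et al.)
gain_fraction = 0.161  # the same gain expressed as a relative increase (16.1%)

reference_yield = gain_kg_ha / gain_fraction   # implied yield of the B. japonicum-only control
coinoculated_yield = reference_yield + gain_kg_ha

print(round(reference_yield))     # 2609 kg·ha⁻¹
print(round(coinoculated_yield))  # 3029 kg·ha⁻¹
```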
Bt alone, however, only promoted an increase in pod number, indicating that biological nitrogen fixation by rhizobia is largely responsible for these changes. The protein and oil content of soybean seeds in this study did not vary among treatments. However, Sheteiwy et al. tested the effect of co-inoculation of Bacillus amyloliquefaciens and mycorrhiza on soybeans under drought stress and observed increased protein and oil content in seeds from inoculated plants cultivated under drought stress compared with the control. Yasmin et al. observed the same increase in oil and protein content when testing the co-inoculation effects of Pseudomonas pseudoalcaligenes and B. subtilis in soybean under salinity stress. Therefore, because both bacteria in this study were tested under the same salinity and irrigation conditions, a potential protective effect of these rhizobacteria against drought and salinity stresses may not have been expressed, and could be assessed under different experimental conditions. Besides, Barbosa et al. showed that other variables, such as soybean growth habit, climate, soil texture, and management system, affect co-inoculation results, and thus they should be considered when determining the inoculation strategy to be applied. Thus, further experimentation with different experimental conditions or plant species may reveal other potential benefits of using Bt RZ2MS9 in co-inoculation strategies. Inoculants and soil prokaryotic community structure One important factor that affects the efficacy of soil microbial inoculants is the competition of inoculated microorganisms with the native soil microbiota (Kaminsky et al. ). In this study, inoculation with Bt RZ2MS9 and rhizobia exhibited minimal interference with native soil taxonomic diversity. In terms of bacterial composition, the phyla Proteobacteria (syn. Pseudomonadota) and Actinobacteria (syn. Actinomycetota) were predominant in all soil samples.
Both phyla are commonly found in soil and can be associated with plant growth promotion through mechanisms such as facilitating the degradation of aminocyclopropane carboxylate and contributing to the suppression of root diseases (Jorquera et al. , Zhang et al. ). Analysis of percentage abundance and beta diversity over time did not reveal clear impacts of either sole inoculation or co-inoculation on bacterial taxonomic diversity. Considering that community beta diversity in this study was not impacted by the inoculations over time, the application of Bt RZ2MS9 and rhizobia appears environmentally safe from a taxonomic perspective. Further testing is needed to reach a final conclusion, including longer time frames and various environmental conditions. Moreover, changes in soil bacterial community structure due to the inoculation of a Bt strain were reported by Jo et al. , and such an effect also occurred after 6 weeks of inoculation, consistent with the findings reported here. In this study, soil sampling for diversity analysis during crop development occurred 45 DAS of the inoculated seeds, and sampling after harvest occurred 147 DAS. This emphasizes the importance of future analysis of the long-term impacts of Bt RZ2MS9 inoculation on soil bacterial diversity. Even though we did not see a change in the structure of the soil microbial community, some taxa were differentially affected by the inoculation treatments. For example, the genus Ralstonia was the only one with reduced relative abundance in treatments inoculated with Bt RZ2MS9, rhizobia, or the combination of both, in CropR1, compared with the control. Also, the genus Gottfriedia was the only one consistently enriched in relative abundance in the soil, regardless of the inoculation performed, at the After time point, compared with the control. Ralstonia is found in soils and includes various species of Gram-negative, non-spore-forming bacteria, some of which are plant pathogens (Peeters et al. ).
Gottfriedia is a genus previously classified within Bacillus, with many agronomically relevant species (Gupta et al. ). The two genera can potentially act as bioindicators of inoculation, since they are sensitive to the presence of the inoculants studied. Bioindicators can be used as a metric of soil functionality, useful for measuring soil quality, restoration, and resilience in both agricultural and environmental contexts (Bhaduri et al. ). The mechanisms explaining the decrease in relative abundance of Ralstonia and the increase in Gottfriedia can be various, such as competition and collaboration with native soil microbes, and understanding the ecology of the inoculants requires considering soil microbial ecology. Another genus that showed consistently responsive behavior to the inoculants was Mycobacterium. Mycobacteria, a diverse and ubiquitous group of Actinobacteria, include species that are significant pathogens and are prevalent in a wide range of habitats, including soil and aquatic environments (Walsh et al. ). In all situations where rhizobium was inoculated, either alone or in combination with Bt RZ2MS9, Mycobacterium consistently showed a reduction in relative abundance at both evaluated time points. In contrast, the inoculation of Bt RZ2MS9 alone did not induce significant changes in Mycobacterium relative abundance, indicating a specific responsiveness to rhizobium. Inoculants and microbiota functional potential The functional diversity measured using the Shannon index on the functional annotation showed a decrease in Bt at CropR1 (45 DAS) compared with Bt at After (147 DAS). Among the functions that changed between the two time points, the relative abundance of N-acyl homoserine lactone hydrolase increased in Bt at CropR1, the period when total functional diversity was at its lowest.
This enzyme hydrolyzes the ester bond of the homoserine lactone ring of N-acyl-l-homoserine lactones, key bacterial quorum sensing regulators, rendering the signaling molecules incapable of binding to their target transcriptional regulators and thus blocking microbial quorum sensing (Kim et al. ). Bt RZ2MS9 carries the gene aiiA, which encodes an acyl-homoserine lactonase (Bonatelli et al. ). Bt inoculation can potentially disrupt quorum sensing in the soil bacterial community, thereby reducing Shannon functional diversity at 45 DAS. However, soil functional diversity returned to natural levels at 147 DAS, demonstrating microbial resilience. The wide distribution of N-acyl homoserine lactone-degrading enzymes in B. thuringiensis is well documented (Lee et al. ), and its quorum quenching action was previously observed in co-inoculation with PGPR (Rosier et al. ), although in comparison with isolated bacteria. This study is the first to report that Bt inoculation in soil can influence functional diversity and that functional diversity can return to previous levels within days to weeks after bacterial inoculation. The genetic markers related to plant growth promotion that were enriched after inoculating Bt RZ2MS9 did not show equal enrichment when rhizobia were co-inoculated. This disparity may be attributed to stronger interactions between rhizobia and the soil's natural community. Conversely, functions related to phosphorus (alkaline phosphatase) and carbon (galactonate dehydratase) cycling were enriched in soils inoculated with rhizobia alone or combined with Bt RZ2MS9. The soils inoculated with rhizobia, alone or in association with Bt, had genes enriched in relative abundance compared with the control. Most of these genes were directly related to phosphorus metabolism (phoA and phoD), but the highest increase was that of a quinoprotein glucose dehydrogenase (gcd). Soil microbes solubilize mineral phosphates by secreting gluconic acid, among other acids.
Gluconic acid is produced from glucose by quinoprotein glucose dehydrogenase (EC 1.1.5.2, GDH) (An et al. ). The inoculation of Bt RZ2MS9 and rhizobia, whether applied alone or in co-inoculation, promoted some parameters of soybean growth, notably plant height. The inoculations showed no significant influence on the diversity or community structure of the native soil prokaryotic microbiome, but Bt inoculation influenced functional diversity. The genera Agromyces, Capillimicrobium, Luteitalea, and Anaeromyxobacter consistently increased in relative abundance after the co-inoculation of Bt RZ2MS9 and rhizobia. These genera can potentially serve as bioindicators of the presence of the inoculants. The genes enriched after co-inoculation were mostly related to phosphorus cycling in the soil. The most pronounced increase was observed in the gcd gene, indicating the release of gluconic acid and phosphorus solubilization as a potentially relevant pathway promoting plant nutrition and growth. The nifA genes increased only when rhizobia were inoculated alone, highlighting the need for a better understanding of the impacts of co-inoculation with Bt RZ2MS9 on nitrogen fixation outside plant nodules. Microbial interactions in soil are complex, and although the inoculation of foreign bacteria does not harm community structure and diversity, it can influence specific native microbial relationships and affect functional diversity.
The Impact of Nanoparticles and Molecular Forms of TiO | 3ab9e234-f316-436b-8a65-9a9dbfe4b6b6 | 11766111 | Microbiology[mh] | The development of nanotechnology has resulted in intense pressure on the environment, particularly through the increasing production and application of metal nanoparticles in industries such as pharmaceuticals, medicine, chemicals, automotive, and agriculture. These nanoparticles, including titanium dioxide (TiO 2 ), are widely valued for their antimicrobial properties, which enable their use as pesticides in crop production. However, the environmental impact of these nanoparticles, especially in terms of waste and ecological interactions, remains a subject of ongoing study. Research has shown that the size and chemical properties of nanoparticles can lead to varying environmental effects, necessitating detailed characterization of these interactions . Intensive crop production is accompanied by the widespread use of agrochemicals, which maximizes crop yield and quality while exerting negative environmental impacts. Major advances have been made in cultivation, fertilization, and protection technologies for food crops, but not for fodder crops. The production methods and fertilization strategies for fodder crops, including forage grasses, are still insufficiently developed. Conventional disease, pest, and weed control in fodder crop production is nearly impossible. Therefore, alternative management options are being sought to optimize yields and improve crop quality. Nanotechnology could support forage grass production, with novel nanomaterials such as TiO 2 nanoparticles applied as biostimulants to reduce the use of fertilizers and crop protection products without compromising efficiency. These nanoparticles also hold potential for enhancing the microbiological and chemical quality of forage grasses . 
Titanium dioxide nanoparticles (TiO 2 NPs) have demonstrated both stimulatory and inhibitory effects on microorganisms, including bacteria and fungi . At the cellular level, these nanoparticles can induce toxicity depending on factors such as size, dose, charge, and exposure time. Smaller nanoparticles (~10 nm) are particularly problematic, as they can penetrate cell membranes, disrupt cellular functions, and reduce chlorophyll synthesis in plants. Additionally, TiO 2 NPs can generate reactive oxygen species (ROS), damage DNA, and disrupt key metabolic processes, as evidenced by laboratory studies on mammals . In soil environments, TiO 2 NPs influence microbial communities and their functions. Concentrations of these nanoparticles can alter bacterial abundance and metabolism, impacting processes like denitrification, sulphur oxidation, and nitrogen cycling. Similarly, their interactions with soil fungi can result in shifts in fungal diversity, including changes in dominant taxa and reductions in phytopathogenic species. While some studies highlight their potential to promote mycorrhizal fungi and plant-beneficial microorganisms, others report contrasting outcomes, reflecting inconsistencies in current research . For example, Asadishad et al. found that low TiO 2 nanoparticle concentrations did not affect soil enzyme activity, whereas higher doses (≥100 mg kg −1 ) were detrimental. Similarly, Moll et al. observed that high doses (1 g kg −1 ) could enhance nitrogen fixation but impair ammonia oxidation and nitrification after prolonged exposure. These findings underscore the complexity of nanoparticle interactions, with factors like particle charge further modulating their effects on rhizosphere dynamics and nutrient delivery . Despite extensive research, the effects of TiO 2 NPs on rhizosphere microbiota remain unclear due to conflicting results. 
While some nanoparticles exhibit biostimulatory effects on microorganisms and support mycorrhization processes, others disrupt microbial communities. Furthermore, most studies focus on small nanoparticles (10–50 nm), with limited data on larger aggregates (≥100 nm), which are more representative of environmental conditions. Larger nanoparticles, while potentially less toxic, remain understudied in terms of their effects on living organisms . This study aims to bridge these knowledge gaps by investigating the effects of TiO 2 NPs with varying sizes, including larger aggregates, on soil microbiome structure and function. The research evaluates changes in microbial physiology, biochemical traits, and their implications for plant growth and pathogen suppression. By identifying nanoparticle characteristics that influence microbial interactions, this study seeks to support the safe and effective application of TiO 2 in agriculture. The findings will contribute to understanding how TiO 2 NPs impact biogenic element cycles and agroecosystem stability, enabling the development of strategies for their sustainable use.

2.1. Bacteriobiome Characteristics

Based on the results of the rarefaction analysis, it was observed that the differences in the abundance structures of individuals and the abundance of OTUs (operational taxonomic units) were similar in all treatments. The lowest abundances of individuals and OTUs were observed for TiO 2 NPs1, and the opposite results were obtained for the untreated rhizosphere ( A). Based on the dissimilarity results from the Bray–Curtis matrix for bacteriobiomes, a complete dissimilarity in the structure of OTUs for TiO 2 NPs1 was observed (branching above the threshold line). The remaining treatments did not show any significant differences from each other. However, the most remarkable similarity was observed between TiO 2 Com and TiO 2 NPs2.
This group was the most different from the TiO 2 NPs1 treatment, while the control was characterized by partial similarity to both TiO 2 NPs1 and the TiO 2 Com-TiO 2 NPs2 group ( B). In each treatment, the dominant bacterial OTUs included Vicinamibacterales (order), Gemmatimonadaceae (family), Vicinamibacteraceae (family), Devosia (genus), and Saprospiraceae (family). The use of TiO 2 NPs1 resulted in a significant reduction in the most numerous OTU, i.e., Vicinamibacterales (order). In the case of the remaining OTUs mentioned above, no significant changes were observed compared to the control. In the TiO 2 NPs1 treatment, an increase in the share of the following genera was observed: Arthrobacter , Polaromonas and Pseudomonas , while after using both forms of the tested titanium dioxide nanoparticles, an increase in the share of Acidibacter spp. and the Rhizobiales order was observed. The commercial preparation did not affect the structure of the rhizosphere bacteriome. Regarding diversity indices, no clear differences were found between the treatments . In the case of the loads of plant growth promotion (PGP) traits ( C), the total trait loads reflected the dissimilarity presented in B. The TiO 2 NPs1 treatment was characterized by the highest PGP potential for all 10 tested features. In the control, the bacteriobiome had a high potential for H 2 S production and an average potential for producing auxins, ethylene, protease, phosphatases, N-fixation I, and CO 2 -fixation. A very low load of bacteria producing gibberellins and siderophores was also observed in this treatment. The TiO 2 Com and TiO 2 NPs2 treatments had similar feature loads. In both treatments, only an average potential for the production of gibberellins and siderophores was observed, and the remaining features were at a low level in the case of TiO 2 NPs2 or very low in the case of TiO 2 Com.
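The treatment-level Bray–Curtis comparison described above can be sketched with SciPy. The OTU table below is an invented placeholder, not the study's data, and the average-linkage choice is an assumption about how such community dendrograms are typically built.

```python
# Sketch of a Bray-Curtis dissimilarity comparison between treatments.
# The OTU counts below are illustrative placeholders, not the study's data.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage

treatments = ["Control", "TiO2Com", "TiO2NPs1", "TiO2NPs2"]
# rows = treatments, columns = hypothetical OTU read counts
otu_table = np.array([
    [120, 80, 40, 30, 10],
    [110, 85, 42, 28, 12],
    [ 60, 30, 90, 70,  5],
    [105, 82, 45, 25, 14],
])

# Pairwise Bray-Curtis dissimilarity: sum(|u - v|) / sum(u + v)
dist = pdist(otu_table, metric="braycurtis")

# Average-linkage clustering on the condensed distance matrix,
# yielding the tree a dendrogram plot would be drawn from
tree = linkage(dist, method="average")
print(np.round(dist, 3))
```

With such data, the smallest pairwise distances identify the most similar treatment pair, mirroring the TiO 2 Com and TiO 2 NPs2 grouping reported above.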
Based on the results of the PCA for bacteriobiomes and PGP traits , similar relationships between treatments were observed as in the dissimilarity analysis. The associated variables (high value traits) for the controls were Micropepsaceae, Devosia , Chitinophagaceae and Chloroflexi . TiO 2 Com-related variables included Vicinamibacteraceae, Gemmatimonadaceae and Saprospiraceae. In contrast, for TiO 2 NPs2, the same variables existed as for TiO 2 Com, in addition to the variables Acidibacter and Terrimonas , gibberellins and siderophores. For TiO 2 NPs1, these were all the PGP traits studied, plus Acidibacter and Terrimonas . Moreover, Vicinamibacteraceae were associated with all TiO 2 treatments. Note that most of the variables analyzed for the rhizosphere treated with any form of TiO 2 were not associated with the control treatment ( A). The TiO 2 NPs1 treatment, unlike the other treatments, was characterized by the highest load of features indicating the chemoheterotrophic nature of the bacteriobiome (N-cycle, decomposition of organic matter). However, compared to the control, it also had a higher load of photoautotrophs .

2.2. Mycobiome Characteristics

The results of the rarefaction analysis were different from those of the bacteriobiomes. The obtained curves revealed that each of the applied forms of TiO 2 increased the size and density of OTUs compared to the control ( D). Based on the dominance class analysis and Fisher’s exact test results, much greater differentiation was observed between OTUs of mycobiomes than between bacteriobiomes under the influence of the forms of titanium dioxide used. In the case of Sebacinales, a significant decrease in the share of this order of fungi was observed after the use of any of the tested forms of titanium dioxide. Moreover, in the case of TiO 2 NPs1, the reduction was significant (almost 6-fold), and this order passed from the class of eudominants to dominants.
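The dominance-class assignments and Fisher's exact tests used above can be sketched as follows. The class thresholds follow a common Engelmann-style convention and are an assumption, not taken from the study, and the read counts are hypothetical.

```python
# Illustrative dominance-class assignment and Fisher's exact test for a
# single OTU's change in share between two treatments. Thresholds are an
# assumed Engelmann-style convention; counts are hypothetical.
from scipy.stats import fisher_exact

def dominance_class(share):
    """Classify an OTU by its relative abundance (fraction of reads)."""
    if share >= 0.10:
        return "eudominant"
    if share >= 0.05:
        return "dominant"
    if share >= 0.02:
        return "subdominant"
    if share >= 0.01:
        return "occasional"
    return "rare"

# Hypothetical 2x2 contingency: (OTU reads, remaining reads) per treatment
control = (1200, 8800)   # share = 12%
treated = (210, 9790)    # share = 2.1%, i.e. a strong reduction

odds, p = fisher_exact([list(control), list(treated)])
print(dominance_class(control[0] / sum(control)))  # eudominant
print(dominance_class(treated[0] / sum(treated)))  # subdominant
print(p < 0.05)
```

A significant p-value together with a change of class (here eudominant to subdominant) is the kind of shift reported above for Sebacinales.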
In the Entoloma genus case, a moderate share decrease (approximately 20%) was observed when TiO 2 Com was used. The phylum Ascomycota increased almost twofold in the case of TiO 2 Com and significantly in the case of TiO 2 NPs1 and TiO 2 NPs2. With the addition of nanoparticles, the dominance class changed from subdominant to eudominant. In the case of the remaining OTUs, no eudominance was observed, but a large diversity of changes in OTU abundance was observed depending on the treatment with titanium dioxide forms. Moreover, a common feature of all forms was a moderate reduction in the share of Pyronemataceae and Byssochlamys OTUs . The treatment using the molecular form of titanium dioxide was characterized by the largest number of OTUs that moved from the occasional to the rare class. Nevertheless, a significant increase in the shares of Humicola and Trechispora was also observed, as well as their transition from the class of occasional to dominant individuals. Moreover, a moderate increase in the proportion of Chaetomium and Pseudogymnoascus was observed in TiO 2 Com. In the case of Mortierella , Sordariales, Pezizales, Terfezia , Penicillium and Oidiodendron , a moderate increase in these OTUs was observed after TiO 2 NPs1 treatment. In this treatment, a significant increase in the share of Chaetomium was also observed (over 13-fold) with a simultaneous change in the dominance class from occasional to subdominant individuals . Moreover, there was a significant decrease (22-fold) in the share of fungi of the Chrysosporium genus and a change in the dominance class from dominants to occasional individuals. The TiO 2 NPs2 treatment was characterized by a significant decrease in the share of Chrysosporium and a change in the class from dominant to rare individuals.
A moderate increase in proportion and dominance class from occasional or rare to subdominant was observed for the OTUs Humicola , Chaetomium , Sordariales, Ascobolus , Pseudogymnoascus , Nadsonia , Terfezia and Trichoderma . The calculated diversity indices for the mycobiome showed that dominance in the control was roughly twice as high as in the other treatments, and that the control had the lowest diversity and evenness. The lowest dominance and the highest diversity characterized the TiO 2 Com and TiO 2 NPs1 treatments. Using each form of titanium dioxide resulted in a statistically significant change in the diversity indices . In the case of Bray–Curtis dissimilarity, the structures of mycobiomes on which any form of titanium dioxide was used differed significantly from the control. TiO 2 NPs1 and TiO 2 NPs2 were the most similar to each other, while TiO 2 Com was an intermediate form between the control and the mycobiomes on which nanoparticle forms were used ( E). Based on a non-standardized analysis of general trophic features of mycobiomes, no clear changes in their share were observed. A slight increase in the share of potentially beneficial fungi was observed after adding each of the forms of TiO 2 , and a slight decrease in saprotrophs in favour of potential phytopathogens in the case of TiO 2 NPs1 ( F). The PCA results showed a similarity analogous to the dissimilarity results shown on the dendrogram ( E). Analyzing the associations between variables and treatments, it was observed that the variables with the highest values for the control were Pyronemataceae, Sebacinales, Byssochlamys and Chrysosporium . The variables correlated with TiO 2 Com included Humicola , Trechispora , and Chrysosporium , which were correlated with saprotrophs and beneficials. The TiO 2 NPs group was common to both treatments, although the correlations varied in strength.
This group included Entoloma , Ascomycota, Chaetomium , Ascobolus , Pezizales, Terfezia , Trichoderma , and fungi OTUs, and weakly associated Oidiodendron and Candida . The large majority of these OTUs were correlated with phytopathogens. However, all treatments except the control were associated with increased loads of beneficial fungi ( B).

2.3. Predicting the Function of Microbiome, Network and PLS-PM Analysis

Based on the analysis of potential physiological activity, it was observed that the overall highest activity occurred in the TiO 2 NPs1 treatment. It was also the treatment with the most significant difference compared to the other treatments, especially with regard to TiO 2 Com and TiO 2 NPs2. The TiO 2 NPs1 treatment was characterized by the highest number of microorganisms capable of metabolizing a wide spectrum of organic substances and of a heterotrophic lifestyle, as well as by photoautotrophs and nitrogen respiration.
In this group, several key features of microbial metabolism were observed, such as those related to the reduction of nitrogen compounds, fermentation, proteolysis, ureolysis, cellulolysis, ligninolysis, CO 2 -fixation, production of phytohormones, acid and alkaline phosphatases, siderophore production, methylotrophy, sulphur and methanol oxidation, phototrophic processes, degradation of organic compounds, and, to a lesser extent, denitrification and chemoheterotrophy. These characteristics were closely associated with various OTUs of bacteria such as Pseudomonadaceae, Arthrobacter , Polaromonas , Rhizobiales, Terrimonas , Acidibacter , Vicinamibacteraceae , as well as with some OTUs of fungi such as Entoloma , Mortierella , Candida , Chaetomium , Terfezia , Sordariales, Oidiodendron , Penicillium , Pezizales, Ascobolus and Ascomycota . The rhizosphere that was not treated with titanium dioxide compounds showed the formation of two subgroups that were not strongly related to each other. These two groups were linked by the fungus Byssochlamys , which was highly correlated with most traits in this group. The first subgroup included features related to denitrification, or aerobic or general chemoheterotrophy associated with Flavobacterium , H 2 S production and fermentation, and some parameters characteristic of the TiO 2 NPs1 treatment. The second group contained features partially shared with those of the TiO 2 Com treatment (titanium dioxide in the form of large nanoparticles). It included chitinolysis, iron respiration, and phytopathogenicity, with a high abundance of OTUs of the bacteria Devosia , Pseudolabrys , Haliangium , Micropepsaceae, Bryobacter , Alphaproteobacteria, Chitinophagaceae, Microscillaceae, Chloroflexi , as well as OTUs of the fungi Sebacinales, Pyronemataceae, Mucronella and Chrysosporium .
In the TiO 2 Com treatment, in addition to the subgroup containing features typical of the control rhizosphere, unique features were observed, such as oxidation of sulphur and sulphides in the dark, OTUs of Xanthobacteraceae, Bauldia and Vicinamibacterales bacteria, and Pseudogymnoascus and Nadsonia fungi. The third subgroup of the TiO 2 Com treatment contains features characteristic of the TiO 2 NPs2 treatment, such as sulphite and hydrogen oxidation in the dark, nitrogen fixation, and OTUs of the bacteria Hyphomonadaceae, Luteimonas , Polyangiales, Vicinamibacteraceae, as well as OTUs of the fungi Trichoderma and Ascomycota . Results of the analysis based on the FungalTraits database show that, under the influence of the forms of titanium used, there were changes in the abundance of groups such as soil saprotrophs, wood saprotrophs and litter saprotrophs. Changes in the abundance of groups such as animal parasites, mycoparasites, ectomycorrhizal fungi, nectar/tap saprotrophs and unspecified saprotrophs were insignificant. The highest abundance of co-dominant soil and wood saprotrophs was observed in the control treatment. In the TiO 2 Com treatment, wood saprotrophs dominated, and soil saprotrophs co-dominated. Applying TiO 2 NPs1 and TiO 2 NPs2 nanoparticles increased the share of soil saprotrophs in the communities. Nevertheless, an increase in litter saprotrophs was observed in the case of TiO 2 NPs1. Based on cluster analysis, two similarity groups for fungal functional groups could be created. The first group included the control and TiO 2 Com, and the second included TiO 2 NPs1 and TiO 2 NPs2. Nevertheless, the Bray–Curtis dissimilarity for mycobiome features in the first group was significant and amounted to 15, compared to a dissimilarity of 8 in the nanoparticle group .
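Assigning fungal genera to trophic guilds and summing guild abundances per treatment, as done above with the FungalTraits database, can be sketched in miniature. The guild table here is a hypothetical stand-in for a FungalTraits-style lookup, and the counts are invented.

```python
# Miniature sketch of genus-to-guild assignment and guild totals.
# The guild lookup is a hypothetical stand-in for a FungalTraits-style
# table; the read counts are invented for illustration.
guilds = {
    "Humicola":      "soil_saprotroph",
    "Trechispora":   "wood_saprotroph",
    "Chrysosporium": "wood_saprotroph",
    "Chaetomium":    "litter_saprotroph",
}

counts = {"Humicola": 310, "Trechispora": 120,
          "Chrysosporium": 40, "Chaetomium": 85, "Unknown_sp": 15}

# Sum reads per guild; genera missing from the lookup stay "unassigned"
totals = {}
for genus, n in counts.items():
    guild = guilds.get(genus, "unassigned")
    totals[guild] = totals.get(guild, 0) + n

print(totals)
# {'soil_saprotroph': 310, 'wood_saprotroph': 160,
#  'litter_saprotroph': 85, 'unassigned': 15}
```

Guild totals computed this way per treatment are what feed the dominance and cluster comparisons of functional groups described above.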
In the partial least squares path modelling (PLS-PM) analysis, the following correlation results between variables were obtained ( p < 0.05): TiO 2 size–Bacterial function r = 0.035, TiO 2 size–Fungal function r = 0.465, Bacterial OTUs–Bacterial diversity r = 0.832, Bacterial OTUs–Bacterial function r = −0.973, Bacterial diversity–Bacterial function r = −0.932, Fungal diversity–Fungal function r = −0.567, and Fungal OTUs–Fungal function r = −0.949. No statistically significant correlation was observed between the remaining groups of variables. More detailed data are presented in .
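PLS-PM itself requires dedicated tooling (commonly the R package plspm); as a simplified stand-in, the sketch below correlates composite block scores with SciPy, mirroring how the sign and strength of the path coefficients above are read. All data are synthetic, not the study's measurements, and the block construction is an assumption for illustration only.

```python
# Simplified stand-in for reading PLS-PM-style relationships: Pearson
# correlations between synthetic "block" scores. The positive OTUs-diversity
# and negative OTUs-function relationships echo the signs reported in the
# text, but the data here are invented.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n = 40
otu_richness = rng.normal(100, 15, n)                  # "Bacterial OTUs" block
diversity = 0.8 * otu_richness + rng.normal(0, 5, n)   # positively coupled
function = -0.9 * otu_richness + rng.normal(0, 8, n)   # inversely coupled

for name, y in [("OTUs~diversity", diversity), ("OTUs~function", function)]:
    r, p = pearsonr(otu_richness, y)
    print(f"{name}: r={r:.2f}, p={p:.3g}")
```

The design choice of averaging or regressing indicator variables into one score per block is what distinguishes full PLS-PM from this pairwise shortcut.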
3.1. Bacteriobiome

The structure of the bacterial rhizobiome did not undergo significant changes within individual OTUs. Nevertheless, regarding changes in dominance and abundance classes, the control was most similar to TiO 2 Com and least similar to the nanoparticles. The TiO 2 NPs1 treatment showed the greatest difference, increasing the share of bacteria belonging to Actinobacteria and Proteobacteria. The taxa that increased their share in the community belonged mainly to heterotrophic microorganisms with high adaptability to environmental changes. However, together with heterotrophs, the number of photosynthetic microorganisms also increased. The greatest potential of PGP in this treatment is directly related to the presence of heterotrophs and the R strategy . Despite large changes in the structure of OTUs between TiO 2 NPs1 and the control, the smallest changes in the functions represented by fungi were observed in the case of TiO 2 NPs1.
The largest volume of features useful for plants, represented by PGPR, was characteristic of TiO 2 NPs1. However, a decline in other features was observed in TiO 2 Com and TiO 2 NPs2, apart from gibberellins and siderophores. The high PGP potential, represented by both the number of microorganisms and the number of features, indicates that rhizosphere bacteria can enter into symbiosis with plant roots, protecting them against pathogens. Such features are characteristic of heterotrophic bacteria and some photoautotrophs, enabling them to enter symbiosis with the plant. Indirectly, they also enable the survival of the rhizosphere structure in dynamically changing habitat conditions, such as unfavourable abiotic conditions or phytopathogen pressure. Moreover, with the TiO 2 NPs1 treatment, a number of PGPR features that could potentially contribute to promoting plant growth and defending the plant against unfavourable biotic and abiotic factors were seen . Among metal nanoparticles, the most attention is paid to silver, and certain regularities in relation to our results can be observed there. Silver nanoparticles and their ionic forms are an excellent example of the impact of nanoparticles on the soil bacteriobiome. Research proved that Ag nanoparticles significantly impact changes in coprophilic bacteria, especially beta-proteobacteria. However, the authors also emphasize that these changes result from interactions, so other environmental variables influence them, rather than only the addition of various nanoparticle types. Moll et al. investigated the effect of higher doses of TiO 2 NPs than in this work and obtained partially different results. The bacteriobiome was highly significantly modified under the influence of titanium dioxide nanoparticles. Moreover, Proteobacteria and Chloroflexi , but not Actinobacteria, were more likely to associate with very large nanoparticles (sizes > 100 nm).
Proteobacteria, Firmicutes, Planctomycetes and some Actinobacteria were associated with large nanoparticles (145 nm), while most bacterial phyla, but not Planctomycetes and Chloroflexi, were associated with small nanoparticles (29 nm). The results of Moll et al. thus partially differ from ours, which indicate the formation of an r-strategy community in soil treated with small-sized TiO 2 . However, it should be remembered that we examined the wheat rhizosphere in this study, not bulk soil.

3.2. Mycobiome

For the functional groups of the mycobiomes analyzed, the nanoparticles were shown to rearrange the fungal community structure differently from the molecular form of titanium dioxide. Nevertheless, the moderate similarity of the TiO 2 Com mycobiome to the control indicates that this form of titanium dioxide interacts with the mycobiome structure in a unique way. A more detailed analysis shows that the main difference between the two groups (nanoparticle vs. control/TiO 2 Com) is that the application of both nanoparticle forms drastically reduced the abundance of wood saprotrophs in favour of soil saprotrophs. The most distinctive change for the particulate form vis-à-vis the control was the significant increase in Humicola and Trechispora fungi. In the case of Humicola spp., this fungus has been used to biotransform particulate forms of TiO 2 into nanoparticulate forms that exhibit antibiotic activity . This indicates not only that this fungal genus is insensitive to TiO 2 nanoparticles, but also that, during biotransformation, the particulate forms can participate in its intracellular metabolism. This explains the more than 10-fold increase in the proportion of these fungi in the TiO 2 Com treatment, as well as their predominant character and noticeable increase in proportion in the nanoparticle treatments.
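The guild-level shift described above (wood saprotrophs giving way to soil saprotrophs) amounts to aggregating genus abundances by their FungalTraits-style guild assignments. The mapping and counts below are illustrative assumptions, not the study's annotations:

```python
# Hypothetical genus -> guild assignments (FungalTraits-style), for illustration
guild_of = {
    "Humicola": "soil saprotroph",
    "Trechispora": "soil saprotroph",
    "Chrysosporium": "wood saprotroph",
    "Chaetomium": "plant pathogen",
}

def guild_shares(abundances):
    """Collapse genus-level abundances into relative guild proportions."""
    totals = {}
    for genus, count in abundances.items():
        guild = guild_of.get(genus, "unassigned")
        totals[guild] = totals.get(guild, 0) + count
    grand = sum(totals.values())
    return {g: round(c / grand, 3) for g, c in totals.items()}

shares = guild_shares({"Humicola": 50, "Trechispora": 10,
                       "Chrysosporium": 30, "Chaetomium": 10})
```

Comparing such guild shares between treatments is exactly how the wood-to-soil saprotroph shift becomes visible even when individual genus changes look modest.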
The response of fungi of the genus Trechispora to TiO 2 has not been described in the literature, which probably indicates the discovery of a strain or species resistant to the particulate form of TiO 2 . Nevertheless, its very low proportion in the other treatments may indicate the use of metal oxides in cellular metabolism. Moreover, the sensitivity of this genus to environmental metal pollution has been confirmed . It should also be noted that this treatment caused an increase in the overall proportion of saprotrophs, mainly owing to the significant increase in the proportion of the two fungal genera described above. In the case of TiO 2 NPs1, a statistically significant increase in the proportion of fungal genera relative to the control was recorded only for Chaetomium . According to the FungalTraits database, this fungus can be a plant endophyte and has a hyper-fertile lifestyle. Nevertheless, under certain conditions, it causes soft rot. It can sometimes be a facultative plant pathogen, which is why, as in previous work, we also classified this fungal genus as a phytopathogen . Both nanoparticle sizes used contributed to a significant reduction in the proportion of fungi of the genus Chrysosporium . As reported in the FungalTraits database, this fungus is mainly responsible for wood decomposition and, as such, may contribute to the mineralization of organic matter in soil environments. A study by Binkauskiene et al. used C. merdarium to investigate the degradation of TiO 2 surfaces. The research team demonstrated that this species could grow and activate its metabolism in contact with TiO 2 , but that successful survival was linked to the formation of 2,2,2-cryptand complexes with Ca 2+ ions. This suggests that, for this species, survival in the environment depends on the availability of calcium ions during exposure to TiO 2 . However, it remains to be confirmed whether this also holds for the nanoparticle forms.
Concerning TiO 2 NPs1 and TiO 2 NPs2, a large number of OTUs, although not significantly, moved two classes higher (from occasional to dominant individuals). A detailed discussion of each would reduce the readability of this paper, so global relationships are described in the following subsection. Here, the focus is on a fungal genus highly valuable for cultivated crops, i.e., Trichoderma sp. It is difficult to explain the mechanism of resistance of Trichoderma fungi to small-particle TiO 2 and its molecular forms. Nevertheless, the literature emphasizes that NPs of biological origin, including TiO 2 NPs of various types, can be synthesized using Trichoderma spp. This proves that the detoxification apparatus of this type of fungus is efficient, even towards metal forms of greater toxicity . These observations explain why the Trichoderma genus was the most numerous in the “small TiO 2 ” treatment. This suggests that the fungus occupies a vacant niche created after the application of a Ti form that is more toxic to the rhizobiome community as a whole. It should be noted that fungi of the Trichoderma genus have a strong potential to resist and even absorb heavy metals from the environment. These fungi probably have metabolic mechanisms for the tolerance or detoxification of titanium dioxide, unlike other fungi in the rhizobiome .

3.3. Relationship Between Changes in Microbiomes and Rhizobiome Metabolism

The results obtained using the network analysis module consolidated the knowledge indicating the comprehensive impact of the various forms of titanium dioxide nanoparticles on the wheat rhizosphere microbiota and its potential consequences for the functioning of the rhizosphere ecosystem. The nanoparticles had a stronger effect on the rhizosphere fungal community than on the bacteria.
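The dominance-class bookkeeping used above (an OTU moving, e.g., two classes higher between treatments) can be sketched as a simple threshold classifier. The class names and abundance cut-offs below are illustrative assumptions only, since the study adopts its boundaries from earlier published schemes:

```python
# Hypothetical dominance classes and abundance cut-offs (% share); the study
# assigns classes following earlier published schemes, so these boundaries
# are stand-ins for illustration.
ORDER = ["subrecedent", "recedent", "occasional", "subdominant", "dominant", "eudominant"]
CUTOFFS = [0.1, 0.5, 1.0, 5.0, 10.0]  # upper bound of each class except the last

def dominance_class(share):
    """Map a relative abundance (%) to a dominance class."""
    for cutoff, name in zip(CUTOFFS, ORDER):
        if share <= cutoff:
            return name
    return ORDER[-1]

def class_shift(before_share, after_share):
    """How many dominance classes an OTU moved between two treatments."""
    return ORDER.index(dominance_class(after_share)) - ORDER.index(dominance_class(before_share))

# An OTU rising from 0.8% to 6% of the community moves two classes up
shift = class_shift(0.8, 6.0)
```

Counting such shifts per treatment is what distinguishes "restructured dominance" from mere changes in individual OTU abundance.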
Despite quite significant changes in proportions and sizable changes in the dominance classes and taxonomic structures of the mycobiomes, the changes in the trophic groups of fungi under the influence of the TiO 2 forms used were not very drastic. For the bacteriobiomes, the opposite trend was observed: insignificant changes in the abundance of many taxa were reflected in relatively drastic changes in the global metabolism of the bacteriobiomes. Applying TiO 2 NPs2 nanoparticles increased the abundance of Trichoderma fungi in the rhizosphere fungal community. However, the high number of Trichoderma spp. was associated with a low number of bacterial functional groups, mainly chemoautotrophs. These results, combined with the ecological characteristics described by Ling et al. , indicate that Trichoderma requires special habitat conditions to thrive. Furthermore, this fungus likely does not favour dynamic and unstable bacterial rhizobiomes (r-strategy). Accordingly, the genus grew best in the TiO 2 NPs2 treatment, which was moderately oligotrophic (K-strategy). Both nanoparticle types were characterized by a large group of unidentified Ascomycota, but also the highest proportion of typical soil saprotrophic fungi, as seen in the heat map of the FungalTraits results in the network analysis. These fungi thrived best in the TiO 2 NPs1 treatment, as evidenced by fungi that readily decompose organic matter, such as Penicillium sp., and that possess numerous traits helpful in maintaining the rhizosphere microbiome. Confirmed traits include the production of IAA and siderophores, P-solubilization and, in the endophytic form, protection of the plant from the adverse effects of heavy metals, drought and cold .
In this case, bacteria in the TiO 2 NPs1 treatment also showed the potential described above, with an increased proportion of typical r-strategy bacteria belonging to Actinobacteria and Proteobacteria, indicating the intensification of heterotrophy. The structure of this rhizobiome brings the plant a complex of features with high growth-promoting potential, such as reduction of nitrogen compounds, fermentation, proteolysis, ureolysis, cellulose and lignocellulose decomposition, CO 2 -fixation, production of phytohormones, acid and alkaline phosphatases, production of siderophores, methylotrophy, oxidation of sulphur and methanol, phototrophic processes and degradation of organic compounds. These features were correlated with different OTUs of bacteria and fungi and indicate the presence of an environmentally flexible rhizobiome, potentially beneficial for plant development . The results of the PLS-PM analysis indicate significant relationships between microbial diversity and ecosystem functions. For both bacteria and fungi, a negative correlation was observed between diversity (or the number of OTUs) and ecosystem function, suggesting that increased diversity does not always lead to improved ecosystem functions. The results suggest the possibility of a functional conflict, where a higher number of species may compete for resources, negatively impacting their functional capabilities, which corresponds with the previously mentioned r- and K-development strategies. Additionally, the size of TiO 2 particles has a clear impact on fungal functions, which may be important for further research into the effects of nanomaterials on microorganisms.
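The sign of the diversity-versus-function relationship reported by the PLS-PM model can be illustrated with a plain Pearson correlation; the per-treatment numbers below are invented for demonstration, not the study's values:

```python
def pearson_r(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-treatment values: OTU richness vs. number of functional groups
otus = [420, 390, 450, 380]
functions = [31, 35, 28, 36]
r = pearson_r(otus, functions)  # negative, echoing the OTUs-function paths above
```

PLS-PM goes further by chaining such relationships into paths with latent variables, but the raw bivariate sign is the part the text interprets ecologically.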
4.1. Experimental Setup

This study involved a pot experiment conducted with Bombona cultivar spring wheat, grown in soil with moderate organic matter content. The soil was amended with TiO 2 nanoparticles of 68 nm (TiO 2 NPs1), 207 nm (TiO 2 NPs2), and a commercial TiO 2 preparation. Each treated group received a final TiO 2 concentration of 10 mg per kg of dry soil. Nanoparticles were purchased from the Sigma Aldrich-MERCK Group (Darmstadt, Germany). Full property characterization and methodology were described in the previous work, in the chapter “Physicochemical Characteristics of TiO 2 NPs” . The control group contained untreated soil. The experiment ran until the wheat reached the tillering stage, at which point rhizosphere samples were collected and DNA was extracted from each sample. Detailed descriptions of the experimental design, including the cultivar, growth chamber conditions, pot setup, and sampling method, were previously documented by Gorczyca et al. , Przemieniecki et al. and Przemieniecki et al. . Soil for the experiment was sourced from the Agricultural Experimental Station in Bałcyny, Poland, known for a high prevalence of Fusarium infections. Samples were then transported to the University of Warmia and Mazury in Olsztyn. The experiment was established on the humus layer of Haplic Luvisol soil (per the WRB classification), classified as a silty sandy loam of moderate quality (class IVa), with the addition of 25% turf substrate to encourage the growth of eukaryotic organisms . Soil was sieved to a 2 mm mesh, adjusted to 60% of its maximum water holding capacity, and placed into pots (2 kg per pot). Spring wheat (cv. Bombona) seeds were manually sown at a depth of 2 cm, with 6 seeds per pot.
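As a worked check of the amendment arithmetic above (a target of 10 mg TiO 2 per kg of dry soil, with 2 kg of soil per pot):

```python
def tio2_per_pot(dose_mg_per_kg=10.0, soil_kg_per_pot=2.0):
    """TiO2 mass (mg) to add per pot for a target soil concentration."""
    return dose_mg_per_kg * soil_kg_per_pot

print(tio2_per_pot())  # 20.0 mg of TiO2 per 2 kg pot
```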
For additional control conditions, separate pots containing only soil without plants were prepared, to which either insect meal or nitrogen fertilizer was added at a rate of 180 kg N ha −1 , alongside an unamended control. The experiment took place in a climate-controlled chamber for 30 days, with environmental settings maintained at 22 °C during the day and 18 °C at night, under a 12 h photoperiod, a light intensity of 220 μmol photons m −2 s −1 , and a relative humidity of 80%. After germination, seedlings were thinned to 5 plants per pot as technical replicates. Following the plant growth period, the above-ground plant material and root systems were carefully removed from the pots, bulk soil was discarded, and the rhizosphere soil adhering to the roots was collected aseptically. Finally, roots were discarded, and rhizosphere samples were frozen at −20 °C for further analysis .

4.2. Sequencing

Genomic DNA was extracted from rhizosphere samples using the GeneMATRIX Soil DNA Purification Kit (EURx, Gdańsk, Poland). Prior to extraction, samples were homogenized with the TissueLyser LT (Qiagen, Hilden, Germany), using glass beads and the kit’s lysis buffer to ensure thorough mixing. Sequencing was carried out by Genomed (Warszawa, Poland), following a protocol outlined in earlier studies . Microbial communities in the samples were analyzed by sequencing the V3–V4 region of the 16S rRNA gene for bacteria and the ITS region for fungi and other eukaryotes. Amplification of these gene regions was performed using Illumina-compatible primers: ITS3F (GCATCGATGAAGAACGCAGC) and ITS4R (TCCTCCGCTTATTGATATGC) for the fungal/eukaryotic ITS region, and 341F (CCTACGGGNGGCWGCAG) and 805R (GACTACHVGGGTATCTAATCC) for the bacterial 16S rRNA region. Illumina adapter overhang sequences were added to these primers for compatibility with the Illumina platform (San Diego, CA, USA).
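The degenerate bases in the quoted primers (N, W, H, V) can be expanded with the standard IUPAC nucleotide codes to enumerate the concrete sequences each primer mixture represents:

```python
from itertools import product

# IUPAC degeneracy codes appearing in the primers quoted above
IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "N": "ACGT", "W": "AT", "H": "ACT", "V": "ACG"}

def expand(primer):
    """Enumerate every concrete sequence a degenerate primer encodes."""
    return ["".join(bases) for bases in product(*(IUPAC[b] for b in primer))]

variants_341f = expand("CCTACGGGNGGCWGCAG")      # N (4) x W (2) -> 8 sequences
variants_805r = expand("GACTACHVGGGTATCTAATCC")  # H (3) x V (3) -> 9 sequences
```

Such degeneracy broadens taxonomic coverage of the amplification at the cost of a slightly mixed primer pool.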
The amplicons were then indexed using the Nextera ® XT Index Kit (Illumina, San Diego, CA, USA), following the manufacturer’s protocol, and sequenced on an Illumina MiSeq (San Diego, CA, USA) in paired-end mode (2 × 250 bp). The resulting sequencing data, saved in FASTQ format, were uploaded to the Metagenomics Rapid Annotation Subsystems Technology (MG-RAST) ( https://www.mg-rast.org/ accessed on 10 April 2024) server for further analysis. Quality control steps included filtering out sequences with five or more ambiguous bases and those significantly deviating from the mean length (±2 standard deviations). Low-abundance sequences (singletons or those representing less than 0.0005% of total abundance) were removed from analysis. The sequences are accessible in the MG-RAST database under project ID mgp96669.

4.3. Statistical Calculation and Data Analysis

Taxonomic diversity of the analyzed OTUs was assessed using several indices: Chao1 for species richness, Simpson’s dominance index (λ) for measuring species dominance, Shannon’s diversity index (H’) for overall diversity, and Pielou’s evenness index (J’) to evaluate species distribution. Diversity indices and rarefaction curves were generated using PAST version 4.13 . Dominance classes for bacterial and fungal communities were assigned based on previous studies by the authors of , respectively. The MACADAM database was utilized to infer microbial community functions, referencing the MetaCyc database for plant growth-promoting (PGP) properties and FAPROTAX for metabolic function analysis . Fungal trait analysis was conducted using the FungalTraits database . Microbiome dissimilarities were calculated using Agglomerative Hierarchical Clustering (AHC) based on the Bray–Curtis method, with dendrograms created using Ward’s method. Pearson-based Mantel tests were used to compare the similarity matrices of different biome types.
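The diversity indices listed above can be computed directly from an OTU count vector; a minimal sketch (Chao1 in its classical form, with the bias-corrected variant as a fallback when no doubletons are present):

```python
from math import log

def diversity_indices(counts):
    """Shannon H', Simpson dominance (lambda) and Pielou evenness J'
    from a vector of raw OTU counts."""
    n = sum(counts)
    p = [c / n for c in counts if c > 0]
    shannon = -sum(pi * log(pi) for pi in p)
    simpson = sum(pi * pi for pi in p)
    pielou = shannon / log(len(p)) if len(p) > 1 else 0.0
    return shannon, simpson, pielou

def chao1(counts):
    """Chao1 richness: S_obs + F1^2 / (2*F2); bias-corrected form when F2 = 0."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)  # singletons
    f2 = sum(1 for c in counts if c == 2)  # doubletons
    return s_obs + f1 * f1 / (2 * f2) if f2 else s_obs + f1 * (f1 - 1) / 2
```

For a perfectly even community the Pielou index equals 1, and Simpson's λ equals 1/S, which is a convenient sanity check when validating a pipeline.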
Principal Component Analysis (PCA) was conducted using a Pearson similarity matrix to analyze microbiome data (OTUs > 2%), PGP traits, and predicted microbial physiological characteristics. Heat maps and bubble charts were created to represent predicted metabolic functions and PGP characteristics sourced from the MACADAM and FungalTraits databases. Partial least squares path modelling (PLS-PM) was applied to examine both direct and indirect effects of the studied variable groups. The model was built using standardized manifest variable weights, with centroid estimation applied for internal calculations. Correlations at a 0.001 significance level and model fit indices were calculated. Statistical analyses were performed in XLSTAT and PAST version 4.13 . Network analysis was carried out using Gephi 0.9 with the ForceAtlas 2 algorithm, based on an “n-standardized” data matrix . The statistical approach presented here was used in a previous study .
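PCA on a Pearson correlation matrix, as described above, reduces to standardizing the data and eigendecomposing the correlation matrix. The random matrix below is a stand-in for the real treatments-by-features table:

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical treatments x features matrix (stand-in for OTU shares,
# PGP traits, and predicted physiological characteristics)
X = rng.random((8, 4))

Z = (X - X.mean(axis=0)) / X.std(axis=0)    # standardize columns
R = np.corrcoef(X, rowvar=False)            # Pearson similarity matrix
eigvals, eigvecs = np.linalg.eigh(R)
order = np.argsort(eigvals)[::-1]           # components sorted by variance
scores = Z @ eigvecs[:, order]              # sample coordinates on the PCs
explained = eigvals[order] / eigvals.sum()  # variance share per component
```

Because the trace of a correlation matrix equals the number of variables, the eigenvalues sum to the variable count, a useful check that the decomposition is set up correctly.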
The MACADAM database was utilized to infer microbial community functions, referencing the MetaCyc database for plant growth-promoting (PGP) properties and FAPROTAX for metabolic function analysis . Fungal trait analysis was conducted using the FungalTraits database . Microbiome dissimilarities were calculated with the Bray–Curtis method and grouped by Agglomerative Hierarchical Clustering (AHC), with dendrograms created using Ward’s method. Pearson-based Mantel tests were used to compare the similarity matrices of different biome types. Principal Component Analysis (PCA) was conducted using a Pearson similarity matrix to analyze microbiome data (OTUs > 2%), PGP traits, and predicted microbial physiological characteristics. Heat maps and bubble charts were created to represent predicted metabolic functions and PGP characteristics sourced from the MACADAM and FungalTraits databases. Partial least squares path modelling (PLS-PM) was applied to examine both direct and indirect effects of the studied variable groups. The model was built using standardized manifest variable weights, with centroid estimation applied for internal calculations. Correlations at a 0.001 significance level and model fit indices were calculated. Statistical analyses were performed in XLSTAT and PAST version 4.13 . Network analysis was carried out using Gephi 0.9 with the ForceAtlas 2 algorithm based on an “n-standardized” data matrix . The statistical approach presented here was used in a previous study .

This study investigated the effects of titanium dioxide (TiO 2 ) applied to soil in different particle forms—small-sized particles, medium-sized nanoparticles, and large nanoparticles—on the wheat rhizosphere microbiome and its functions. Our results indicate that TiO 2 in its various forms significantly alters the taxonomic structure of the wheat rhizosphere microbiota, with implications for soil ecosystem processes, particularly biogeochemical cycles and microorganism interactions.
Notably, the application of large TiO 2 nanoparticles (TiO 2 NPs2) negatively impacted both the bacteriome and mycobiome. Conversely, medium-sized nanoparticles (TiO 2 NPs1) were associated with a beneficial restructuring of the microbiome, fostering an environment conducive to plant growth by promoting a heterotrophic strategy within the rhizobiome. These findings highlight the potential of medium-sized nanoparticles (approximately 60 nm in diameter) to enhance the microbiome’s functional capacity, which could lead to improved plant health and productivity. Our study also revealed that while TiO 2 nanoparticle treatments altered the microbiome’s structure, these changes did not necessarily result in adverse effects on the biochemical potential of the rhizosphere. In fact, medium-sized nanoparticles showed the most favourable impacts, boosting microbial diversity and increasing the abundance of microorganisms that antagonize plant pathogens. The comparison of different TiO 2 forms demonstrated distinct patterns in the microbial community, with the smaller nanoparticles fostering a more flexible and diverse microbiome, while larger particles resulted in a more oligotrophic environment. These observations suggest that the application of TiO 2 nanoparticles, particularly medium-sized ones, could play a role in promoting sustainable agricultural practices by enhancing microbial interactions in the rhizosphere. In summary, the results of this study provide valuable insights into the influence of TiO 2 nanoparticles on soil microbiomes and highlight their potential for improving crop production. Future research should focus on further elucidating the mechanisms underlying the interactions between nanoparticles and rhizosphere microorganisms, as well as their long-term impacts on soil health and ecosystem stability.
Neurophysiological Evaluation of Neural Transmission in Brachial Plexus Motor Fibers with the Use of Magnetic versus Electrical Stimuli | 9201673b-aee4-4e81-b800-4315d8b3a127 | 10146775 | Physiology[mh] | The anatomical complexity of the brachial plexus and its often multilevel damage require specialized in-depth diagnostics. The purpose is to select the appropriate treatment, assess its effectiveness, and provide prognostic information about its course . Imaging of the brachial plexus, such as ultrasound or magnetic resonance imaging, provides important information about the nerve structures and surrounding tissues. Contemporary studies emphasize the importance of these tests, but they do not mention assessing the brachial plexus function . Besides the clinical examination , the diagnostic standard for brachial plexus function should include clinical neurophysiology tests. Electroneurography (ENG) studies are used to assess the function of motor fibers and peripheral sensory nerves. Somatosensory evoked potentials are used to evaluate afferent sensory pathways. Needle electromyography analyses the bioelectrical activity of the muscles innervated by peripheral nerves originating from the brachial plexus. The results of the tests above determine the extent, type, and severity of the damage. ENG of motor fibers uses a specific low-voltage electrical stimulus. It stimulates the nerve motor fibers, causing their depolarization, and the excitation spreads to the muscle, resulting in the generation of compound muscle action potential (CMAP). The strength of the electrical stimulus should be supramaximal, of sufficient intensity to generate CMAP with the highest amplitude and shortest latency. The CMAP amplitude reflects the number of conducting motor axons, and latency refers to the function of the myelin sheath and the rate of depolarization, mainly in fast-conducting axons . 
Despite the advantages of this type of stimulation, it has limitations due to the physical properties of the electrical stimulus. The main limitation is the inability to penetrate through the bone structures surrounding the brachial plexus in its proximal part, at the level of the spinal roots, at the spinal nerves in the neck, and often at Erb’s point. Stimulation at Erb’s point may be complicated due to the individual anatomy of the examined person, such as obesity, extensive musculature, or past injuries at this level. This can significantly affect the CMAP parameters and give false positive results indicating pathology of the assessed motor fibers. In contrast to ENG, magnetic stimulus is used to induce motor evoked potential (MEP) . Its use in brachial plexus diagnostics overcomes these limitations, which is of great clinical importance . The propagation of excitation along the axon and elicitation of motor potential using a magnetic stimulus is similar to electrical stimulation. However, as some authors indicate, the applied magnetic stimulus may be submaximal due to magnetic stream dispersion or insufficient power generated by the stimulation coil. Therefore, the assessment of MEP parameters may not reflect the actual number of excitable axons, and the interpretation of the results may incorrectly determine the functional status of the brachial plexus. An MEP study can provide important information regarding the location of the injury, especially in cases of traumatic damage to the brachial plexus where there may be multiple levels of impairment. The physical properties of the magnetic stimulus released from the generator device to penetrate bone structures would have to allow an assessment of the proximal part of the brachial plexus, especially at the level of the spinal roots. Scientific studies are mainly concerned with MEP efferent conduction studies in patients with disc–root conflict and other neurological disorders . 
Little attention has been paid to assessing the peripheral part of the lower motoneurone, including brachial plexus injuries, with MEP; such studies constitute a novel element of the present work. The main concern has been high-voltage electrical stimulation applied over the vertebrae . To the best of our knowledge, apart from studies by Schmid et al. and Cros et al. from 1990, this paper is one of the few sources of reference values. Therefore, it makes a practical contribution to the routine neurophysiological diagnosis of brachial plexus injuries. The aim of this study was to reinvestigate the hypothesis concerning the usefulness of the MEP test applied both over the vertebrae and at Erb’s point to assess the neural transmission of the brachial plexus motor fibers, with special attention to the functional evaluation of the short brachial plexus branches. The latter element has not been examined in detail ; most of the studies have been devoted to the evaluation of the long nerves, such as the median or ulnar. In addition, we formulated the following secondary goals: to compare the parameters of electrically evoked potentials (CMAP) with the parameters generated by magnetic stimulus (MEP), and to analyze whether these stimulation methods have comparable effectiveness and whether they could be used interchangeably during an examination. This would make it possible to select a method by taking into account the individual patient’s needs and the examination targets. Moreover, the additional aim of our work was to confirm that magnetic stimulation induces supramaximal potentials with the same parameters as during electrical stimulation, which was previously considered a methodological limitation . A further study aim was to confirm an assumption that magnetic stimulation is less painful than electrical stimulation and better tolerated by patients during neurophysiological examinations, which has never before been examined. 2.1.
Study Design, Participants, and Clinical Evaluation Seventy-five volunteer subjects were randomly chosen to participate in the research. The ethical considerations of the study were compliant with the Declaration of Helsinki. Approval was granted by the Bioethical Committee of the University of Medical Sciences in Poznań, Poland (resolution no. 554/17). All the subjects signed a written consent form to voluntarily participate in the study without financial benefit. The consent included all the information necessary to understand the purpose of the study, the scope of the diagnostic procedures, and their characteristics. Before the study began, fifteen subjects declined to participate. The subjects in the study group (N = 60) were enrolled based on the results of clinical studies performed independently by a clinical neurophysiologist and a neurologist. The exclusion criteria included craniocerebral, cervical spine, shoulder girdle, brachial plexus, or upper extremity injuries and other systemic disorders under treatment. The contraindications to undergoing neurophysiological tests were pregnancy, stroke, oncological disorder, epilepsy, metal implants in the head or spine, and implanted cardiac pacemaker or cochlear implant because of the use of magnetic stimulation. The results were analyzed blindly, satisfying intra-rater reliability. The medical history and clinical studies consisted of evaluating the sensory perception of the upper extremities according to the C5-C8 dermatomes and peripheral nerve sensory distribution, based on von Frey’s monofilament method . The maximal strength of the upper extremity muscles was assessed using Lovett’s scale . A bilateral clinical examination of each volunteer was performed once. Based on the clinical examination and medical history, the neurologist classified the subjects in the research group as healthy volunteers. 
After excluding 14 participants who did not meet the inclusion criteria and 4 others who withdrew during the neurophysiological exams, the final group included 42 subjects. The characteristics of the study group (N = 42) and a flowchart of the diagnostic algorithm proposed in this study are presented in and . There were 40 right-handed participants and only 2 left-handed. 2.2. Neurophysiological Examination All the participants were examined bilaterally once according to the same neurophysiological schedule. Each time, we used both magnetic and electrical stimuli to assess the function of the peripheral nerve and magnetic stimulus to evaluate neural transmission from the cervical spinal root. We applied stimulation three times at Erb’s point and at the selected level of the cervical segment, checking the repeatability of the evoked potential. The compound muscle action potentials (CMAP) recorded during electroneurography (ENG) and the motor evoked potentials (MEP) induced by magnetic stimulation were analyzed. During the neurophysiological examination, the subjects were in a seated position, with relaxed muscles of the upper extremities and shoulder girdle, and in a quiet environment. The KeyPoint Diagnostic System (Medtronic A/S, Skøvlunde, Denmark) was used for the MEP and CMAP recordings. External magnetic stimulus for the MEP studies was applied by a MagPro X100 magnetic stimulator (Medtronic A/S, Skøvlunde, Denmark) via a circular coil (C-100, 12 cm in diameter) ( A,B). The strength of the magnetic field stream was 100% of the maximal stimulus output, which means 1.7 T for each pulse. The recordings were performed at an amplification of 20 mV/D and a time base of 5–8 ms/D. For the CMAP recording, a bipolar stimulation electrode and a single rectangular electric stimulus with a duration of 0.2 ms at 1 Hz frequency was used. The intensity of the electrical stimulus was 100 mA to evoke the supramaximal CMAP amplitude at Erb’s point.
Such strength is obligatory and is determined by anatomical conditions and the fact that the nerve structures of the brachial plexus lie deep in the supraclavicular fossa. In the ENG studies, the time base was set to 5 ms/D, the sensitivity of recording to 2 mV/D, and 10 Hz lower and 10 kHz upper filters were used in the recorder amplifier. A bipolar stimulation electrode was used, the poles of which were moistened with a saline solution (0.9% NaCl). The skin where the ground electrode and recording electrodes were placed was disinfected with a 70% alcohol solution; along with the conductive gel, this reduced the resistance between the skin and the recording sensors. The impedance did not exceed 5 kΩ. In the ENG examination, the bipolar stimulation electrode was applied at Erb’s point over the supraclavicular region, along an anatomical passage of the brachial plexus motor fibers. If repetitive CMAP with the shortest latency and the highest amplitude was evoked at this point, the spot became the starting point for the application of magnetic stimulation at this level (hot spot). To assess the MEP from the spinal roots of the cervical segment, the magnetic coil was applied 0.5 cm laterally and slightly below the spinous process in accordance with the anatomical location of the spinal roots (C5–C8). In this way, the cervical roots were selectively stimulated. For the recording of CMAP and MEP, standard disposable Ag/AgCl surface sensors with an active surface of 5 mm 2 were used in the same location for both electrical and magnetic stimulus. The active electrode was placed over the muscle belly innervated by the peripheral nerve, taking the origin from the superior, middle, and inferior trunk of brachial plexus. The same selected muscles also represented a specific root domain in accordance with the innervation of the upper extremity through the cervical segment of the spine.
The reference electrode was placed distal to the active ones, depending on the muscle, i.e., on the olecranon or the tendon . A list of the tested muscles and their innervation (peripheral pathway and root domain), as well as the location of electrodes are given in . The same parameters were analyzed for both the CMAP and MEP recordings. The amplitude of the negative deflection (from baseline to negative peak, measured in mV), distal latency (DL) (from visible stimulating artefact to negative deflection of potential, measured in ms), and standardized latency (SL) were calculated by the equation SL = DL/LNS where LNS is the length of the nerve segment between the stimulation point (Erb’s point) and the recording area on the muscle (measured in cm). A reliable value of standardized latency depends on an accurate distance measurement. Therefore, a pelvimeter, which reduces the risk of error in measuring the distance between the stimulation point and the recording electrode, was used in the research. This makes it possible to consider the anatomical curvature of the brachial plexus nerves. The standardized latency indicates a direct correlation between latency and distance. This is important in assessing the conduction of the brachial plexus short branches with regard to various anthropometric features of the examined subjects, such as the length of the upper extremities relative to height. In standard neurophysiological tests of short nerve branches, the F wave is not assessed, hence the calculation of the root conduction time for nerves such as axillary, musculocutaneous, etc., is not possible. 
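Both the standardized latency defined above and the proximal standardized latency defined in the following paragraph are simple ratios; a minimal sketch with hypothetical latency and distance values (function names are illustrative, not from the study):

```python
def standardized_latency(dl_ms: float, lns_cm: float) -> float:
    """SL = DL / LNS: distal latency (ms) divided by the length of the
    nerve segment from Erb's point to the recording electrode (cm)."""
    return dl_ms / lns_cm

def proximal_standardized_latency(mrl_ms: float, mel_ms: float, d_cm: float) -> float:
    """PSL = (MRL - MEL) / D: root-evoked MEP latency minus Erb's point
    MEP latency, divided by the distance between the two stimulation points."""
    return (mrl_ms - mel_ms) / d_cm

# Hypothetical values: DL = 5.2 ms over a 26 cm segment -> SL = 0.2 ms/cm
sl = standardized_latency(5.2, 26.0)
# Hypothetical values: MRL = 4.5 ms, MEL = 3.0 ms, D = 10 cm -> PSL = 0.15 ms/cm
psl = proximal_standardized_latency(4.5, 3.0, 10.0)
```

Expressing both latencies in ms/cm makes recordings comparable across subjects with different limb lengths, which is the stated purpose of the normalization.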
In order to assess conduction in the proximal part of these nerves, the value of standardized latency was also calculated (proximal standardized latency, PSL) using the following equation: PSL = (MRL − MEL)/D where MRL is the latency of MEP from the root level stimulation (measured in ms), MEL is the latency of MEP elicited from Erb’s point stimulation (measured in ms), and D is the distance between these two stimulation points (measured in cm). Therefore, the PSL value reflects the conduction between the cervical root and Erb’s point for each examined nerve. Distal latency and standardized latency correspond to conduction speed in the fastest axons. The amplitude of the recorded potentials and their morphology reflects the number of conducting motor fibers . After undergoing neurophysiological tests, the subjects reported which of the applied stimuli (electrical or magnetic) evoked a painful sensation, as scored on a 10-point visual analogue scale (VAS) . 2.3. Statistical Analysis The statistical data were analyzed using Statistica 13.3 software (StatSoft, Kraków, Poland) and are presented with descriptive statistics: minimal and maximal values (range), and mean and standard deviation (SD) for measurable values. The Shapiro–Wilk test was performed to assess the normality of distribution, and Levene’s test was used to define the homogeneity of variance in some cases. The results from the neurophysiological studies were compared to determine the differences between the sides (left and right), genders (female and male), stimulation techniques (electrical and magnetic), and stimulation areas (Erb’s point and cervical root). The changes in the evoked potential parameters between the groups of men and women were calculated with an independent Student’s t -test. In cases where the distribution was not normal, a Mann–Whitney U test was used.
The dependent Student’s t -test (paired difference t -test) or Wilcoxon’s test (in the absence of distribution normality) was used to compare the differences between the stimulation methods, stimulation areas, and sides of the body. p -values less than 0.05 were considered statistically significant. The percentage of difference was expressed for each variable. An analysis of lateralization influence was not performed because there was only one left-handed volunteer. With regard to the results of the clinical tests, including pain measured by a 0–10 point visual analogue scale (VAS) and muscle strength measured by the 0–5 point Lovett’s scale, the minimum and maximum values (range) and mean and standard deviation (SD) are presented. At the beginning of the pilot study, statistical software was used to determine the required sample size using the amplitudes from the MEP and ENG recordings with a power of 80% and a significance level of 0.05 (two-tailed) as the primary outcome variable. The mean and standard deviation (SD) were calculated using the data from the first 10 patients of each gender, and the software estimated that at least 20 patients were needed as a sample size for the purposes of this study.
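The dependent (paired-difference) t statistic used for these comparisons reduces to the mean paired difference over its standard error. The actual analyses were run in Statistica; the sketch below is only a minimal pure-Python illustration with hypothetical paired amplitude values:

```python
import math

def paired_t(x, y):
    """Dependent Student's t statistic for matched samples, e.g. CMAP vs MEP
    amplitudes recorded from the same nerve in the same subject."""
    if len(x) != len(y) or len(x) < 2:
        raise ValueError("need two equal-length samples with n >= 2")
    d = [a - b for a, b in zip(x, y)]
    n = len(d)
    mean_d = sum(d) / n
    var_d = sum((di - mean_d) ** 2 for di in d) / (n - 1)  # sample variance
    se = math.sqrt(var_d / n)       # standard error of the mean difference
    return mean_d / se              # compare with t distribution, n - 1 df

# Hypothetical CMAP vs MEP amplitudes (mV) from four nerves:
t_stat = paired_t([10.0, 12.0, 11.0, 14.0], [9.0, 11.0, 10.0, 12.0])  # -> 5.0
```

When the differences are not normally distributed (Shapiro–Wilk), the study substitutes Wilcoxon’s signed-rank test for this statistic.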
The research group was homogeneous in terms of age. We found statistically significant differences between the women and men concerning height, weight, and BMI . In the clinical study, the Lovett’s muscle strength score was found to be 5 on average for both men and women. This cumulative result applies to all assessed muscles bilaterally, i.e., deltoid, biceps brachii, triceps brachii, and abductor digiti minimi, and reflects the proper maximal muscle contraction against the applied resistance. The results of the sensory perception studies of the upper extremities, according to dermatomes C5–C8, were within normal limits in the study group. There were no significant differences in the CMAP and MEP between the right and left sides among women (N = 21) and men (N = 21). Hence, further comparative analysis of CMAP and MEP between the two groups refers to the cumulative number of tests performed (N = 42). The results are presented in . The significantly prolonged latency of evoked potential in the men compared to the women is related to the greater distance between the stimulation point and the recording level, due to anthropometric features such as the length of the extremities, which are longer in men. However, this does not determine the value of standardized latency reflecting conduction in a particular segment. These values are comparable in the two groups for both types of stimulation (electrical and magnetic) and levels of stimulation (Erb’s point and cervical root) with generally no statistical differences. The exception is the C5 spinal root and Erb’s point stimulation (both electrical and magnetic) for the radial nerve. In the cases above, the standardized latency was significantly longer in the group of men.
However, the percentage difference is only 8–11% and the numerical difference is only about 0.02 ms/cm, and these differences are not clinically significant. Similarly, there were significant differences in the amplitude of evoked potentials between women and men. In the assessment of the musculocutaneous nerve, CMAP and MEP generated from Erb’s point showed higher values in the men, while those generated from the ulnar nerve had higher values in the women. The difference is also between 10 and 16%, without clinical significance, and may have resulted from a measurement error, such as the cursor setting during the analysis of potentials. Because the conduction parameters in the groups of women and men were comparable, further statistical analysis was conducted on 84 tests (both groups were combined). The parameters of potentials generated by electrical stimulus (CMAP) were compared with those of potentials generated by magnetic impulse (MEP). Stimulation in both cases was applied at Erb’s point. The data are presented in and . The amplitude of CMAP was significantly higher after electrical stimulation than MEP after magnetic stimulation for all the examined nerves, in the range of 3–7%. This may have been due to the wider dispersion of electrical stimulation according to the rule of electrical field spread. The latency of the evoked potentials was significantly shorter after magnetic stimulation, which is related to the shorter standardized latency. Note that the difference in potential latency values using the two types of stimulation did not exceed 5%. This may be a result of the deeper and more selective penetration of magnetic impulses into tissues (based on the rule of magnetic field spread) and through the bone structures, and, thus, faster depolarization of the brachial plexus fibers. presents examples of CMAP and MEP recordings following electrical and magnetic stimulation at Erb’s point. 
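For reference, a percentage difference of the kind reported throughout these comparisons can be computed as below. The paper does not state which reference value it used, so this sketch expresses the difference relative to the mean of the two readings, one common convention.

```python
def percent_difference(a, b):
    """Absolute difference between two paired readings, expressed as a
    percentage of their mean (one common convention; the paper does not
    state which reference value it used)."""
    return abs(a - b) / ((a + b) / 2.0) * 100.0
```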
The repeatability of the morphology of potentials with the use of both types of excitation is noteworthy. The brachial plexus trunks are stimulated at Erb’s point in the supraclavicular area. In the area over the vertebrae, the spinous processes of the vertebrae are points of reference for the corresponding spinal root locations. In the cervical spine, according to the anatomical structure, the spinal roots emerge from the spinal cord above the corresponding numbered vertebrae. A,B presents magnetic coil placements during the MEP study, while gives data results. The results show significantly higher amplitudes of the potentials after stimulation of the cervical roots compared to the potentials evoked at Erb’s point for C5 and C6. In the case of C8, the amplitude was lower than the potentials evoked at Erb’s point. It should be noted, however, that these values varied in the range of 9–16%, which, as explained above, is not clinically relevant. We also note the comparable values of proximal standardized latency (PSL) in the cervical root–Erb’s point segment for all the stimulated nerves. presents the MEP recordings after magnetic stimulation of the C5 to C8 cervical spinal roots. The MEPs recorded from the cervical roots have a repetitive and symmetrical morphology. The MEPs have a lower amplitude at the C8 level than in the other studied segments (see and ). After undergoing neurophysiological tests, the subjects indicated the degree of pain sensation during stimulation according to a 10-point visual analogue scale (VAS) (see ). The results indicate that they felt more pain or discomfort during electrical stimulation. The subjects described it as a burning sensation. They also indicated that magnetic stimulation was perceptible as the feeling of being hit, causing a more highly expressed motor action (contraction of the muscle as the effector of the stimulated nerve). 
Neuroimaging and basic clinical examinations of sensory perception and muscle strength are still the primary approaches for evaluating brachial plexus injury symptoms . Neurophysiological diagnostics is considered supplementary, with the aim of confirming the results of the clinical evaluation. The main novelty of the present study is that it proves the similar importance of magnetic and peripheral electrical stimulation over the vertebrae in evaluating the functional status of brachial plexus motor fiber transmission. A strength of our research is the neurophysiological assessment of the function of the brachial plexus short branches, which are part of its trunks. Our studies prove the similarity of results obtained with the two mentioned methods following the excitation of nerve structures at Erb's point. The latency and amplitude values of the potentials (CMAP, MEP) evoked at this level by the two types of stimuli differed in the range of 2–7%. In routine diagnostic tests, this range of difference would not significantly affect the interpretation of the results of neurophysiological tests. Hence, we conclude that magnetic and electrical stimuli could be used interchangeably during an examination. We also proved that the range of excitation of motor fibers by a magnetic impulse may be supramaximal, given the stable and comparable MEP and CMAP amplitudes. The properties of supramaximal motor potential with the shortest latency were, in previous studies, attributed to the effects of electrical stimulation, which is commonly used in neurophysiological research. Many authors pointed to the limited diagnostic possibilities of the magnetic stimulus , the advantages of which were examined in detail in this paper. This is crucial because of the different anthropometric features of patients and the possible extent of damage to the structures surrounding the brachial plexus.
Past fractures, swelling, or post-surgical conditions at this level may limit the excitation of axons by an electrical stimulus. The benefit of magnetically induced MEP is that it is less invasive than electrical stimulation, as concluded from the VAS pain scores (see ). The movement artifact associated with magnetic stimulation may influence the quality of the MEP recordings, which should be considered during the interpretation of the diagnostic test results . MEP studies allow evaluation of the proximal part of the peripheral motor pathway, between the cervical roots and Erb's point, which is not accessible to low-voltage electrical stimulation. The comparable amplitudes of MEPs induced by magnetic stimulus recorded over the vertebrae with those recorded at Erb's point, as shown in our study, could be the basis for the diagnosis of a conduction block in the area between the spinal root and Erb's point. By definition, in a neurophysiological examination, a conduction block is considered to have occurred when the amplitude of the proximal potential is reduced by 50% relative to the distal potential. In the opinion of Öge et al. , the amplitude of evoked potentials induced by stimulation of the cervical roots compared with potentials recorded distally using electrical stimulation may help to reveal a possible conduction block at this level. According to Matsumoto et al. , the constant latency of MEP induced by magnetic stimulation of the cervical roots was comparable with potentials induced by high-voltage electrical stimulation. In our opinion, similar to the method mentioned above, combining two research techniques using magnetic stimulation of the cervical roots or Erb's point and conventional peripheral electrical stimulation is valid for neurophysiological assessment of the brachial plexus. Previous studies on a similar topic by Cros et al.
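The 50% criterion for conduction block quoted here reduces to a one-line check; a minimal sketch (function and parameter names are mine):

```python
def conduction_block(proximal_amp, distal_amp, threshold=0.5):
    """Apply the definition quoted in the text: a conduction block is
    suspected when the proximal potential's amplitude falls by at least
    `threshold` (50% by default) relative to the distal potential."""
    drop = (distal_amp - proximal_amp) / distal_amp
    return drop >= threshold
```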
involving healthy subjects revealed parameters of MEPs recorded from proximal and distal muscles of the upper extremities with the best “hot spots” from C4–C6 during stimulation over the vertebrae. They found that the root potentials were characterized by similar latencies, while the amplitudes recorded from the abductor digiti minimi muscle were the lowest following excitation at the C6 neuromere, contrary to our study, in which they were evoked the most effectively but with the smallest amplitudes following stimulation at C8 (see ). We similarly recorded the largest amplitudes for MEPs evoked from the proximal muscles of the upper extremity. However, our study only involved magnetic stimulation over the vertebrae and not electrical stimulation, which was considered painful. In another study by Schmid et al. , magnetic excitation over the vertebrae at C7-T1 evoked MEPs with smaller amplitudes from distal muscles than proximal muscles compared to high-voltage electrical stimulation applied to the same area. Similar to our study, for MEPs following magnetic versus low-voltage electrical stimulation at Erb’s point, latencies were shorter and amplitudes were smaller, and the morphology was the same (see and ). The standardized latencies were comparable for both types of stimulation, which was not reported by Schmid et al. . In our opinion, when interpreting the results of neurophysiological tests of the brachial plexus, the reference values show a trend in terms of whether the parameters of the recorded potentials are within the normal range or indicate pathology . When interpreting the results, special consideration should be given to comparing them with the asymptomatic side, which is the reference for the recorded outcome on the damaged side . 
The results of the present study can be directly transferred to clinical neurophysiology practice, owing to the possibility of using two different stimuli in diagnostics to evoke potentials with the same parameters, recorded by non-invasive surface sensors. Magnetic stimulation appears to be less painful due to the non-excitation of the afferent component, contrary to electrical stimulation, where antidromically excited nociceptive fibers may be involved . One of the study limitations that may have influenced the results, especially the latency parameters of the potentials, was the anthropometric differences between the women and men included in the study group. However, the gender proportions were equal, making the study population typical of European countries. Considering the number of participants examined in this study, it should be mentioned that, due to comparable conduction parameters in the groups of women and men, the final statistical analysis covered 84 tests to compare the parameters of potentials evoked with electrical or magnetic impulses. Moreover, as mentioned in , at the beginning of the pilot study, statistical software was used to determine the required sample size, and it was estimated that at least 20 patients were needed for the purposes of this study. This study reveals that the parameters of evoked potentials in CMAP and MEP recordings from the same muscles after the application of magnetic and electrical stimuli to the nerves of the brachial plexus are comparable. Magnetic field stimulation is an adequate technique that enables the recording of supramaximal potential (instead of the submaximal reported in other studies ), which is the result of stimulation of the entire axonal pool of the tested motor path, similar to testing with an electric stimulus.
We found that the two types of stimulation can be used interchangeably during an examination, depending on the diagnostic protocol for the individual patient, and the parameters of evoked potentials can be compared. Moreover, in the case of patients sensitive to stimulation with an electric field, which is considered to cause pain in neurophysiological diagnostics, it is crucial to have the possibility of changing the type of stimulus. Magnetic stimulus is painless in comparison with electrical stimulus. We can conclude that the use of magnetic stimulation makes it possible to eliminate diagnostic limitations resulting from individual anatomical conditions or anthropometric features (such as large muscle mass or obesity). MEP studies allow us to evaluate the proximal part of the peripheral motor pathway (between the cervical root level and Erb's point, and via trunks of the brachial plexus to the target muscles) following the application of stimulus over the vertebrae, which is the main clinical advantage of this study. It may be of particular importance in the case of damage to the proximal part of the brachial plexus. As a study of brachial plexus function, MEP should be compared to imaging studies in order to obtain full data on the patient's functional and structural status.
A Brief Overview of the 2023 European Society of Cardiology Clinical Guidelines on the Management of Cardiovascular Disease in Patients with Diabetes Mellitus

The authors of the document emphasize the enormous importance of lifestyle change and state that in patients with T2DM it is lifestyle modification that is key and mandatory for reducing cardiovascular risk. As a class IA recommendation, it is stated that individuals with obesity or overweight should reduce weight and increase physical activity. Other therapeutic interventions may also be used but are of lesser importance: GLP-1 receptor agonists or bariatric surgery (class IIaB). With regard to optimizing nutrition, patients with T2DM are recommended (class IA) to follow an adapted Mediterranean or plant-based diet (a diet rich in digestible dietary fiber) with a high content of unsaturated fats in order to reduce cardiovascular risk. The experts also recommend a mandatory increase in physical activity (at least in the form of a 10-minute walk), noting that optimal moderate-intensity exercise should last 150 minutes per week and vigorous exercise 75 minutes per week (IA), and stressing the need to perform resistance exercise at least twice a week (IB). For patients with established CVD, including coronary artery disease (CAD), any type of HF, or AF, structured physical exercise is recommended, that is, dividing sessions into separate segments and planning them in advance (IB). All smoking patients with T2DM must stop smoking to reduce cardiovascular risk (IA). As possible aids to achieving this goal, nicotine replacement therapy, varenicline or bupropion, as well as individual or remote counseling, should be considered (IIaB).
Blood pressure control is recommended (measurement of BP at every visit and the use of antihypertensive drugs are desirable) when office BP is ≥140/90 mm Hg (IA). Target BP values in the treatment of hypertension in patients with diabetes should be determined individually, aiming for a systolic BP <130 mm Hg if well tolerated while avoiding reduction below 120 mm Hg; for individuals over 65 years of age, a target SBP of 130–139 mm Hg may be considered (IA). A key part of the guidelines is the correction of lipid disorders in patients with diabetes. The experts confirmed the previous "European" LDL cholesterol targets: for patients at moderate cardiovascular risk, <2.6 mmol/L (IA); at high risk, <1.8 mmol/L with at least a 50% reduction in LDL from baseline (IA); and at very high risk, <1.4 mmol/L with at least a 50% reduction in LDL from baseline (IB). Statins are the first-line drugs for achieving LDL targets in patients with diabetes (IA), and if maximal or maximally tolerated statin doses do not achieve the target values, the addition of ezetimibe is recommended (IB). For patients at very high cardiovascular risk with LDL persistently above target despite the maximally tolerated statin dose combined with ezetimibe, or for patients with statin intolerance, a PCSK9 inhibitor is recommended (IA). The question of target glycemia in patients with diabetes is rather complex: on the one hand, study data show an association between more intensive treatment and better microvascular outcomes (with respect to neuropathy and retinopathy) ; on the other hand, lower HbA1c targets may be associated with increased cardiovascular mortality . Therefore, in formulating recommendations on target glycemic levels, the ESC expert group was very cautious in its conclusions.
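The LDL-C goals quoted here pair an absolute threshold with a 50%-reduction requirement for the higher risk categories. A minimal sketch of that logic, using the values from the text (risk labels and function name are mine; illustrative only, not clinical decision software):

```python
def ldl_at_goal(risk, baseline, current):
    """Check the ESC LDL-C goals quoted in the text (mmol/L):
    moderate risk: <2.6; high: <1.8 plus >=50% reduction from
    baseline; very high: <1.4 plus >=50% reduction from baseline."""
    targets = {"moderate": 2.6, "high": 1.8, "very_high": 1.4}
    below = current < targets[risk]
    if risk == "moderate":
        return below
    halved = current <= 0.5 * baseline
    return below and halved
```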
The following key points can be highlighted. The experts indicate that, to reduce the long-term risk of CAD, stricter glycemic control should be preferred, and that treatment should favor drugs with proven cardiovascular benefit [IIaB]. Drawing on the accumulated data from randomized trials, the ESC experts state that the choice of glucose-lowering therapy should be based primarily on the cardiovascular benefits that the treatment will provide. Therefore, prescribing an SGLT2 inhibitor or a GLP-1 agonist for patients with diabetes should be as mandatory as, for example, prescribing a statin or an ACE inhibitor/angiotensin receptor blocker. For patients with diabetes and established ASCVD, the prescription of these drugs carries a class I recommendation (Fig. 2). Another important point is that in patients with T2DM and known ASCVD, treatment should start with a GLP-1 agonist and/or an SGLT2 inhibitor (drugs with proven efficacy in reducing cardiovascular risk), regardless of HbA1c level and concomitant glucose-lowering therapy (class I) (Fig. 3). To improve glycemic control, the experts suggest considering the addition of metformin (IIa) and pioglitazone (IIb). With regard to the use of antiplatelet agents, anticoagulants or their combination in patients with T2DM, the provisions of the updated clinical guidelines have changed little compared with previous documents. For example, the experts indicate that low-dose acetylsalicylic acid (ASA, 75–100 mg) may be considered for primary prevention in patients with T2DM if there are no clear contraindications (IIbA). Such contraindications include gastrointestinal bleeding or gastric ulcer within the previous 6 months, active liver disease (cirrhosis or active hepatitis), or ASA intolerance.
The ASCEND program , which studied ASA in a large RCT for the prevention of CVD and which, strictly speaking, was the only large direct randomized trial on this topic, showed a benefit of ASA over placebo. However, this benefit was apparent only in the first 3 years of treatment, with no differences in outcomes between the ASA and placebo groups in subsequent years; the rate of major bleeding (mainly gastrointestinal) on ASA was significantly higher, and the NNT and NNH for ASA versus placebo were almost identical, at 91 and 111, respectively. The recommendation on the use of proton pump inhibitors (PPIs) in combination with antithrombotic drugs underwent minor changes. It is stated that any combination of antithrombotic agents requires mandatory concomitant use of a PPI for the prevention of gastrointestinal bleeding (class IA). At the same time, the experts recommend against using clopidogrel together with omeprazole or esomeprazole (class IIIB), and this provision raises questions: first, because the study underlying this recommendation was only an analysis of medical databases, and second, because it showed that an increased risk of cardiovascular complications occurred when clopidogrel was combined with omeprazole, lansoprazole, esomeprazole and pantoprazole, but not rabeprazole . The problem of glycemic correction in patients with acute coronary syndrome (ACS) has always been relevant, and the conflicting results of several studies leave this question open rather than resolved. Most of the available data suggest that the best glycemic strategy in the first hours of ACS is moderately strict control (aiming for target values of 10–11 mmol/L) while avoiding hypoglycemia where possible. This approach, for example, was associated with better outcomes compared with tighter glycemic control in the NICE-SUGAR study of critically ill patients in intensive care units .
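The NNT and NNH figures quoted for ASCEND are reciprocals of absolute risk differences; a small sketch of that arithmetic (the event rates used below are illustrative, not the trial's published data):

```python
import math

def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat (or harm): the reciprocal of the
    absolute risk difference, rounded up to a whole patient."""
    return math.ceil(1.0 / abs(control_event_rate - treated_event_rate))
```

For example, event rates of 10.0% versus 8.9% give an absolute risk difference of 1.1% and hence an NNT of 91.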
At the same time, the guideline authors emphasize that all patients with ACS must have their baseline glycemic status assessed (class IB) and frequently monitored to detect hyperglycemia, that is, a glucose level above 11.0 mmol/L (class IC), and that the need for glucose-lowering therapy should be considered while avoiding hypoglycemia (class IIaC). How exactly blood glucose should be corrected in ACS is not specified in detail, but in most cases of significant hyperglycemia an insulin infusion is proposed, although the recent EMMY trial showed that the use of empagliflozin early after acute myocardial infarction may be associated with improvements in NT-proBNP levels and left ventricular function . In recent years, the most extensive changes in the treatment of CVD have concerned the treatment of patients with heart failure (HF), precisely at the intersection with diabetes care. This refers to the SGLT2 inhibitors (dapagliflozin and empagliflozin), which have for several years been first-line drugs in the treatment of patients with HF and reduced left ventricular ejection fraction (LVEF) . Later studies demonstrated improved outcomes associated with empagliflozin and dapagliflozin in patients with preserved LVEF. The fact that these glucose-lowering SGLT2 inhibitors are effective in patients with HF regardless of the presence or absence of diabetes makes the HF treatment recommendations non-specific to patients with diabetes. With regard to the treatment of diabetes in patients with HF or at risk of developing HF, the experts recommend prescribing, first of all, drugs with proven benefit for cardiovascular outcomes (class IA) or, if necessary, changing glucose-lowering therapy, switching from drugs that are neutral or potentially harmful with respect to cardiovascular outcomes to those with proven cardiovascular benefit (class IC).
The key provisions of the guidelines call for the mandatory use of one of the SGLT2 inhibitors (dapagliflozin or empagliflozin) or the SGLT2/1 inhibitor sotagliflozin (not yet registered in the Russian Federation) in patients with HF and low LVEF (class IA), as well as the mandatory use of empagliflozin or dapagliflozin in patients with HF and LVEF >40% (class IA). With regard to other classes of glucose-lowering drugs, the ESC experts consider it possible to use GLP-1 agonists (lixisenatide, liraglutide, semaglutide, etc.), as well as DPP-4 inhibitors (sitagliptin and linagliptin) that do not affect the risk of HF and its complications, in patients with diabetes who have HF or are at risk of developing HF (class IIaA). Similar recommendations apply to the use of metformin and basal insulins (glargine and degludec) (class IIaB). Among glucose-lowering drugs not recommended in patients with HF or at risk of HF, the experts list the thiazolidinedione pioglitazone and the DPP-4 inhibitor saxagliptin, owing to the increased risk of HF and of HF hospitalization in patients with T2DM (class IIIB). For saxagliptin, the basis for this caution was the SAVOR-TIMI 53 trial, in which analysis of secondary endpoints showed an association of the drug with a higher risk of HF, which was particularly striking against the background of the absence of any prognostic benefit . Fig. 4 schematically presents the general principles of glucose-lowering therapy in patients with HF and T2DM. Strictly speaking, the ESC experts do not give any specific recommendations for the management of patients with diabetes and atrial fibrillation (AF). However, given that diabetes is a risk factor for thromboembolic complications and a component of the CHA2DS2-VASc score, the use of an oral anticoagulant for stroke prevention may be considered in patients with AF and diabetes but without other thromboembolic risk factors (class IIaB).
Renal impairment is an important problem for patients with diabetes, since the kidneys are, on the one hand, one of the "target organs" and, on the other, kidney damage (including that related to diabetes) is associated with a higher risk of developing CVD . Therefore, the guidelines place the emphasis in diabetes treatment not on glycemic correction as such but on reducing the risks of CVD and chronic kidney disease (CKD). Patients with diabetes are recommended to undergo routine screening of kidney function, with assessment of eGFR by the CKD-EPI equation and of the urinary albumin-to-creatinine ratio (class IB). In the recommendations for patients with diabetes and CKD, the ESC experts call for mandatory intensive lipid-lowering therapy with statins or a statin plus ezetimibe, which is particularly important given that CKD is an independent risk factor for CVD (class IA), although for patients with severe CKD on dialysis the benefit of intensive lipid-lowering therapy is less clear. The need for strict blood pressure control is emphasized, with target values no higher than 130/80 mm Hg to reduce the risk of CVD and albuminuria (class IA). The experts indicate an individualized choice of target HbA1c, from 6.5 to 8.0%, with a preferred HbA1c <7.0% to reduce the risk of microvascular complications (class IA). The mandatory use of maximally tolerated doses of an ACE inhibitor or angiotensin receptor blocker is noted (class IA), and all patients with T2DM and CKD with an eGFR ≥20 mL/min/1.73 m² should receive an SGLT2 inhibitor (canagliflozin, empagliflozin or dapagliflozin) to reduce the risk of CVD and kidney failure (class IA). To achieve adequate glycemic control, reduce the risk of hypoglycemia, obtain the benefits of weight reduction, and reduce the risk of CVD and albuminuria in patients with an eGFR above 15 mL/min/1.73 m², the experts recommend GLP-1 agonists (class IA) (Fig. 5).
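The eGFR cut-offs quoted here (SGLT2 inhibitor at eGFR ≥20, GLP-1 agonist above 15 mL/min/1.73 m²) can be encoded as a simple lookup. This is an illustrative sketch of the thresholds only, not clinical decision software; the function and key names are mine.

```python
def ckd_drug_eligibility(egfr):
    """Map an eGFR value (mL/min/1.73 m^2) to the two class-IA drug
    thresholds quoted in the text. Illustrative only."""
    return {
        "sglt2_inhibitor": egfr >= 20,
        "glp1_agonist": egfr > 15,
    }
```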
One of the novelties of these guidelines is the drug finerenone, the use of which is recommended by the experts (class IA) in addition to an ACE inhibitor/angiotensin receptor blocker in patients with CKD and albuminuria (an elevated urinary albumin-to-creatinine ratio) to reduce the risk of CVD and kidney failure. Finerenone is a new nonsteroidal selective mineralocorticoid receptor antagonist. In the double-blind FIDELIO-DKD trial of more than 5,500 patients with T2DM and CKD randomized to finerenone or placebo, over 2.5 years the rate of adverse events (worsening of kidney function defined as a ≥40% decline in eGFR, and death from renal causes) was significantly lower with finerenone: 17.8% vs. 21.1% (relative risk 0.82; 95% confidence interval 0.73–0.93; p=0.001) . For patients with diabetes, CKD and CAD, the ESC experts recommend equally either a conservative strategy with optimal medical therapy or an initially invasive strategy, given their indistinguishable outcomes (class IB). Such a treatment strategy implies not only fully informing patients with respect to treatment choices and decision-making, but also helps empower patients to participate actively in finding solutions to their problems. Within this approach, it is recommended to provide patients with diabetes with structured educational programs to improve knowledge about diabetes, glycemic control, prevention of complications, and patients' rights and opportunities (class IC). It is also recommended to make decisions in the context of the patient's goals and priorities (class IC). Concluding this overview of the ESC clinical guidelines on the treatment of CVD in patients with diabetes, it should be said that this problem remains relevant and requires the simultaneous involvement of several specialists in the care of a single patient. In addition to the cardiologist and endocrinologist, endovascular surgeons, nephrologists, and specialists in nutrition and rehabilitation are often involved in this work.
In any case, it is important to understand that in everyday clinical practice, stricter adherence to clinical guidelines is always associated with better outcomes, all the more so because the new guidelines contain provisions based on evidence of improved prognosis, which is especially important for clinicians and practicing physicians. Funding sources: the work was carried out on the authors' initiative without external funding. Conflict of interest: the authors declare no obvious or potential conflicts of interest related to the content of this article. Author contributions: all authors approved the final version of the article before publication and agreed to be accountable for all aspects of the work, implying proper investigation and resolution of issues related to the accuracy or integrity of any part of the work.
Catastrophic ocular complications in leprosy: a case report

Leprosy, also called Hansen's disease, is a chronic infective granulomatous disease caused by Mycobacterium leprae . With the implementation of multidrug therapy (MDT) in 1981, cure rates have improved and disabilities have been reduced globally . Ocular involvement in leprosy occurs in 70-75% of patients; about 10-50% of leprosy patients suffer from severe ocular symptoms, and blindness occurs in about 5% of patients . Leprosy is considered a preventable cause of blindness. Leprosy is a common cause of physical disabilities, which lead to social stigma and isolation. Although MDT has reduced the disabilities caused by leprosy, its contribution to reducing the incidence of reactions and subsequent nerve damage has yet to be confirmed. Hitherto, strategies to prevent disabilities in old cured patients and reduce their incidence in the newly diagnosed are still lacking. Patient information: a 76-year-old farmer presented with loss of vision in both eyes and amputated digits for the past twenty years. On enquiry, he revealed that he was a diagnosed case of leprosy but had been irregular with antileprosy drugs and had visited multiple healthcare centers for his complaints. Over the past twenty years, he had episodes of redness, pain, and watering in both eyes, for which he had taken treatment from a local practitioner. Clinical findings: ocular examination of both eyes revealed visual acuity with no perception of light, wide palpebral aperture, complete loss of eyebrows and eyelashes, ectropion of the upper and lower lids, and extraocular movement restriction in all gazes. There was keratinization of the bulbar and palpebral conjunctiva, and anterior staphyloma in both eyes . General examination of the patient revealed a collapsed nasal bridge, resorption of multiple fingers , and toes .
Timeline of current episode: the patient had been suffering from loss of vision due to leprosy for about twenty years. Over the course of time, the digits of his hands and toes were amputated. Diagnosis: the patient was a diagnosed case of leprosy. Therapeutic interventions: no therapeutic intervention was provided in the department of dermatology, as the patient was a burnt-out case of leprosy. Follow-up and outcome of interventions: in the department of ophthalmology the patient was advised enucleation followed by implantation of an orbital implant and orbital prosthesis for cosmetic correction, but he was lost to follow-up. Informed consent: written informed consent was obtained from the patient. Diagnostic assessment: peripheral nerve examination revealed thickened bilateral ulnar, bilateral median, bilateral radial cutaneous, left lateral popliteal and left anterior tibial nerves. Sensations were absent in the upper and diminished in the lower limbs in a glove-and-stocking pattern. Slit skin smear (SSS) examination from the earlobes and the dorsal surface of the fingers was negative for acid-fast bacilli. In leprosy, the eyes can be involved in three ways: (i) as a complication of involvement of the facial and trigeminal nerves; (ii) by invasion of the eyeball by Mycobacterium leprae in lepromatous leprosy; (iii) and by participation in the generalized allergic reaction, known as the reactive phase . Granulomatous infiltration of the branches of the facial nerve supplying the frontalis and orbicularis oculi leads to frontalis weakness and lagophthalmos. Absence of blinking and lagophthalmos predispose the eye to injuries and foreign bodies, and constant exposure of the cornea to heat, dust and wind leads to exposure keratitis. Secondary infection of exposure keratitis leads to the development of a corneal ulcer and perforation. In our patient, loss of corneal sensation and lagophthalmos resulted in the development of a corneal ulcer, which eventually perforated.
Organization of exudates and laying down of fibrous tissue healed the defect, but at the expense of lost transparency, reduced strength and reduced vision. Gradually, the weakened anterior surface of the eye, lined by newly formed epithelium, protruded outward, leading to an anterior staphyloma. A leprosy patient with intact visual acuity can manage a daily routine; ocular involvement, however, creates a double handicap: loss of visual acuity and blindness on the one hand, and limb deformities causing additional disability on the other. Appropriate care of the affected body parts (especially areas with reduced sensation) is hampered by poor vision and reduced visual sensory input, while deformed extremities in turn do not permit proper eye care. The social stigma, the old age of many patients, and the poor quality of eye care services further add to the woes of the patient. This article presents one such case of leprosy, with resorption of digits and lack of proper eye care leading to bilateral blindness. Other clinical presentations of ocular leprosy are mentioned in . Inflammation of the tissues in the superior orbital fissure (CN III, CN IV, the V1 division of CN V, and CN VI) and in the optic nerve canal results in orbital apex syndrome. In leprosy, the risk of ocular complications increases with the duration of the disease. Visual impairment resulting from leprosy is preventable if diagnosed at an early stage. All leprosy patients (including cured ones) should undergo a baseline ophthalmological examination, and must be made aware that prompt ophthalmological review is required for any new ocular signs or symptoms to prevent avoidable blindness, given the life-long risk of sight-threatening ocular complications. Patients deemed to be at higher risk warrant regular follow-up.
Neurophysiology of movement inhibition during full body reaching

Much of our understanding of response inhibition in humans is derived from go/no-go experiments. First published in 1969, go/no-go paradigms require voluntary movement to a go stimulus (i.e., push a button when a light turns green) and inhibition of movement to a no-go stimulus (i.e., do nothing when a light turns red). Narrow response windows promote faster movements, thus priming the motor system for action and making it harder to correctly inhibit movement. Conventional studies have been limited to single-limb movements (i.e., finger movement) and have drawn conclusions on response inhibition based on the presence or absence of an overt behavioral response (i.e., a button press). The goal of the current study was to extend our understanding of movement inhibition in humans by implementing a go/no-go paradigm within a full body reaching task while measuring activation of the motor system at the behavioral and neurophysiological levels.

Go/no-go paradigms have been widely used to investigate response inhibition in healthy adults, adolescents, and individuals with a range of disorders. Using go/no-go button press tasks, these studies have generally found that response inhibition is compromised across varying disorders, and that the ability to inhibit the motor system is a fundamental property of a healthy central nervous system. For example, individuals with higher levels of impulsivity and lower IQ incorrectly respond to no-go stimuli (termed errors of commission) more often than those with lower levels of impulsivity and higher IQ. However, an important caveat to using overt motor responses is that the absence of an error of commission does not necessarily translate to complete inhibition of the motor system at the neurophysiological level.
Indeed, not only do response devices differ in sensitivity (how much force must be applied for a response to be registered), but brain and muscle can activate without producing an overt behavioral response. To our knowledge, no one has implemented a go/no-go paradigm within a full body reaching movement and used neurophysiological measures, in combination with behavioral measures, to quantify errors of commission. In the current study, we implemented our full-body reaching task within a novel go/no-go experimental paradigm. We developed this task in virtual reality (VR) space and combined it with electromyography (EMG) measurements. Virtual targets were standardized such that participants could in theory reach the target during go trials by flexing their lumbar spine 15° (high target) or 60° (low target) with the shoulder flexed 90° and the elbow fully extended. In addition to measuring movement of the hand controller, which tracked hand position (i.e., an overt behavioral response), we also measured muscle activity from the tibialis anterior, deltoid and multifidus muscles. We first hypothesized that during no-go trials, errors of commission would be greater when calculated using neurophysiological measures (muscle activity measured by EMG) than when using an overt behavioral measure (hand position). Second, we hypothesized that onset times would be faster for neurophysiological measures than for behavioral measures, and that higher errors of commission during no-go trials would be associated with faster onset times during go trials. Finally, given the further distance of the low target, we expected errors of commission to be greater and onset times to be faster for the low as compared to the high target.

No-go errors of commission analysis across EMGs and hand position

We hypothesized that during no-go trials errors of commission would be greater when calculated using neurophysiology measures as compared to an overt behavioral response.
Results from our two-way, repeated-measure ANOVA found a significant effect of height (p = 0.045) and a significant effect of muscle/hand position (p = 0.008). No significant interactions were found (height × sex: p = 0.397, muscle/hand position × sex: p = 0.222, height × muscle/hand position: p = 0.443, height × muscle/hand position × sex: p = 0.285). Post-hoc analysis for the main effect of height indicated that more errors were committed at the low than the high target (high target mean and standard deviation: 4.9 ± 0.44; low target: 5.9 ± 0.51). Data for errors of commission for the main effect of muscle/hand position are shown in Fig. A and means and standard errors are presented in Table . For ease of interpretation, error count is collapsed across target height. Results from the FDR-corrected post-hoc analysis can be found in Fig. B. Grey indicates a p-value > 0.05, while pink indicates a p-value < 0.01. We found that errors of commission in the right tibialis anterior were highest and significantly greater than the errors of commission in the left tibialis anterior, right multifidus, left multifidus, deltoid, and hand position. Errors of commission in the left tibialis anterior were not significantly greater than in the right multifidus and left multifidus but were greater than in the deltoid and hand position. More errors of commission were committed in the right multifidus than in the deltoid and hand position, and in the left multifidus compared to hand position, but the two multifidi muscles were not statistically different from one another. Notably, errors of commission in hand position were significantly lower than all other measures, suggesting attenuated sensitivity to errors in behavioral as compared to neurophysiological measures.

Onset analysis across EMGs and hand controller

We hypothesized that onset times would be faster using neurophysiology measures as compared to behavioral measures.
The results from our two-way, repeated-measure ANOVA found an effect of muscle/hand position (p < 0.001), but no effect of height (p = 0.204) and no interactions (height × sex: p = 0.204, muscle/hand position × sex: p = 0.272, height × muscle/hand position: p = 0.896, height × muscle/hand position × sex: p = 0.556). See Table for onset means and standard errors and Fig. D for results from the FDR-corrected post-hoc analysis. Onset times for each muscle/hand position are shown in Fig. C. Onset times were fastest in the right tibialis anterior and slowest in the left multifidus. Notably, all onset times were statistically different from each other except between the left multifidus and hand position.

Correlation between errors of commission and onset across all measures

Figure shows the relationship between errors of commission during no-go trials and onset times during go trials across all measures. There was a significant negative correlation between the two variables, r = −0.221, p < 0.001, with higher errors of commission associated with faster onset times. All correlations for individual muscle/hand positions failed to reach significance (all p's > 0.05).

The goal of the current study was to determine whether errors of commission derived from behavioral and neurophysiological measures differ during a go/no-go full body reaching task. Our study is the first to successfully combine a go/no-go paradigm with a standardized full body reaching task in virtual reality space. Our observations show that, when directly comparing neurophysiological and behavioral measures, neurophysiological measures were associated with a greater number of errors during no-go trials and faster onset times during go trials. Further analyses revealed a negative correlation between errors and onset times when all the measures were combined (muscles + hand position), such that the muscles that activated fastest during go trials also had the greatest number of errors during no-go trials. Errors and onset times also followed a distal-to-proximal cascade consistent with evidence of anticipatory postural adjustments (APAs) in reaching tasks. Previous studies have used EMG to examine the neural mechanisms underlying movement inhibition, during both go/no-go and stop signal tasks. Although partial EMG responses can be evident during successful no-go trials and successful stop signal trials, the mechanisms underlying the two processes are different. No-go trials require action restraint, whereas stop trials require action cancellation. Here we focus on action restraint.
Whereas previous studies have used a button press as the overt behavioral response and EMG recordings from a single muscle, we implemented a full body reaching task, recorded EMG from multiple muscles, and compared the number of errors of commission across behavioral and neurophysiological measures. Compared to an overt behavioral response, errors of commission were higher when using muscle activity as a readout of motor system activation. Our behavioral measure of hand position had significantly fewer errors than all neurophysiological measures. Errors of commission for overt behavioral responses on no-go trials were 4.38% in the current study, which is consistent with values of ~ 5% reported in studies using index finger key press tasks. Error rates rose to 8–16% when using EMG measures in our study, in line with findings by Raud et al., who reported EMG activity in the abductor pollicis brevis during 14% of successful no-go trials in which there was no overt button press. Together these findings suggest that despite different tasks (full body reaching vs button press) and recordings from different effectors (finger vs torso vs limbs), responses during no-go trials are remarkably similar, which points to a relatively stable movement inhibition mechanism in humans. Although recent evidence points to clear differences between the spatial and temporal profiles of go/no-go tasks and stop signal tasks, a fundamental role of prefrontal cortex in movement inhibition is well established. Lesions in prefrontal cortex are associated with worse performance on tasks that require response inhibition, as compared to tasks that do not. Removing a bilateral meningioma in medial prefrontal cortex led errors of commission to drop from 75% to normal levels, providing causal evidence for the association between medial prefrontal cortex function and movement inhibition during a go/no-go task. Other support comes from in-vivo neuroimaging studies. Konishi et al.
found consistent brain activation in right inferior prefrontal cortex irrespective of which hand completed the go/no-go task, suggesting effector independence in the prefrontal inhibitory mechanism. This is important because it suggests that the effector being used, and its corresponding somatotopic organization in the human motor system, does not appear to influence how easy or difficult it is to restrain action. Errors of commission were higher for more distal as compared to proximal muscles. For instance, the largest number of errors was evident in the right and left tibialis anterior, and the fewest in the deltoid muscle. APAs during reaching movements are one explanation for this pattern in the data. APAs reflect a robust and consistent adjustment in posture made in anticipation of the arm moving away from the center of mass, characterized by activity in distal musculature like the tibialis and gastrocnemius muscles prior to activation of muscles in the torso and arm. Other research has investigated how the neural control of APAs is affected by aging, pain, Parkinson's disease, and stroke. For example, APAs during arm movements in the elderly are characterized by slower onset latencies in the tibialis anterior and gastrocnemius and by a hip strategy that promotes stability by recruiting muscle activity in a more proximal-to-distal manner. We found the opposite pattern of activity in the current study in young healthy adults, with errors of commission occurring more often in distal as compared to proximal muscles. Future research utilizing EMG as a neurophysiological measure of inhibition can determine how age and disease modulate this pattern during full body reaching. Higher errors of commission during no-go trials were associated with faster onsets during go trials.
Our overt behavioral measure of hand position occurred with an average onset time of 487 ms, and this onset was later than the onset of activity in all of the muscles except the left multifidus. Individual correlations at each muscle/hand position failed to reach significance. This means that the association between onset time and errors of commission only emerges when combining data across multiple muscles. This relationship suggests that future research implementing a neurophysiological method of measuring response inhibition should consider the typical cascade of muscle activity onsets associated with the task, as our results suggest that the muscles with the fastest onsets are the same muscles that will commit the most errors of commission during response inhibition. Together, our results suggest that the lack of an overt behavioral response is not sufficient to assume that the motor system is completely inhibited. Conclusions on response inhibition based on go/no-go paradigms that only measure overt movements should be made with caution, given that muscle can be active despite no evidence of an overt behavioral response.

Participants

The University of Florida Institutional Review Board approved this study and the research adhered to relevant guidelines and regulations. Informed consent was collected from all subjects prior to data collection. All data collection occurred at the University of Florida, and we recruited 51 participants for this study. Exclusion criteria for all participants included self-reported history of Parkinson's disease, Alzheimer's disease, multiple sclerosis, amyotrophic lateral sclerosis, TIA, stroke, seizures, epilepsy, cancer, heart disease, or pregnancy. The mean age was 19.5 years (± 0.72); 14 participants were male and 38 were female.

Equipment

A Delsys Trigno Wireless system (Delsys Inc., Boston MA, USA) was used to collect surface EMG from seven muscles. One electrode was placed over the right deltoid (Delt) to quantify movement of the right arm during reaching.
We placed EMGs over the right and left tibialis anterior (RTA, LTA) given their role in anticipatory postural adjustments during forward displacement of the torso. To represent the erector spinae group, we placed electrodes over the right and left multifidus (RMulti and LMulti). We did not collect data from the biceps and triceps brachii as we did not anticipate strong involvement of elbow flexion and extension, because participants started each trial standing with upright posture and their arm relaxed by their side with the elbow fully extended. EMG sampling rate was 1000 Hz. An HTC VIVE virtual reality system (HTC Corp., New Taipei City, Taiwan) was used to project the game space. The game was programmed using the Unity Virtual Reality platform (Unity Technologies, Copenhagen, Denmark). Hardware included the VIVE headset, two wireless hand controllers to track position of the hands, and two base stations for continuous localization of the headset, controllers, and trackers in space. The virtual reality game space was projected to the participant's headset as well as to a 30″ computer monitor (Dell UltraSharp U3011, Dell Co, Round Rock, TX) to ensure that the researcher could see what the participant was seeing within the VR environment. A Motion Monitor system (Innovative Sports Training, Inc., Chicago, IL) and an Arduino microcontroller (Arduino LLC, Italy) were used to synchronize data from the VR equipment and the EMG electrodes. Accuracy of position and orientation of the VIVE trackers/controllers is within 0.68 ± 0.32 cm translationally and 1.64 ± 0.18° rotationally. The sampling rate of the VIVE controllers was 103 Hz.
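The paper states that the 1000 Hz EMG and 103 Hz controller streams were synchronized in hardware, but it does not describe how the two sampling rates are reconciled for analysis. One conventional approach (an assumption here, not necessarily the authors' method) is to linearly interpolate the slower stream onto the faster stream's time base once both share a synchronized clock:

```python
import numpy as np

def resample_to_emg(emg_t, ctrl_t, ctrl):
    """Interpolate 103 Hz controller samples onto the 1000 Hz EMG time base.

    emg_t, ctrl_t : sample timestamps (s) on a shared, synchronized clock
    ctrl          : controller signal (e.g., hand displacement) at ctrl_t
    Returns one controller value per EMG sample.
    """
    return np.interp(emg_t, ctrl_t, ctrl)

# Hypothetical 1 s of data at the two rates reported in the paper.
emg_t = np.arange(0, 1, 1 / 1000.0)   # 1000 Hz EMG timestamps
ctrl_t = np.arange(0, 1, 1 / 103.0)   # 103 Hz controller timestamps
ctrl = ctrl_t ** 2                    # stand-in hand trajectory
ctrl_1khz = resample_to_emg(emg_t, ctrl_t, ctrl)
```

Linear interpolation is adequate here because hand displacement is smooth relative to a 103 Hz sampling grid; higher-order resampling would change little.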
Calibration of target height and virtual reality avatar

Target locations were calculated from anthropometric measurements taken from each participant using one VR hand controller that marked the following locations on the body: the ground directly between the participant's medial malleoli, top of the head, C-7, L-5, right greater trochanter, right acromioclavicular joint, right elbow, and tip of the right middle finger. From these coordinate data, target location in the VR space was normalized to each participant's anthropometric measurements (i.e., hip height, trunk length, and arm length; see for more details). Thus, the virtual targets were located such that the participants could reach the target, in theory, by flexing their hips 15 degrees (high target) or 60 degrees (low target), with the shoulder flexed 90 degrees and the elbow fully extended. The coordinate data were also used to scale the participant's avatar in VR space to match real-world coordinates to ensure fidelity of visual feedback in the VR space.

Virtual reality reaching task

Participants started each trial with upright posture and each hand holding a VR controller relaxed at their sides. Figure shows the timeline, visual display, and a cartoon of body position during a single virtual reality reaching trial. Each trial began with a rest period. Next, participants were told that a blue cube would appear in front of them and that this cube represented the target they would need to reach forward and touch. After 2000 ms, the same cube turned green (go trial) or red (no-go trial). Participants were told to stand still if the cube turned red. If the cube turned green, the participant reached out with their right hand and touched the cube. The cube remained green or red for 750 ms and participants were instructed to reach and touch the green cube before it disappeared. Once the target disappeared, participants had 8.5 s to return to the starting position and get ready for the next trial.
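As a worked example of the target normalization described above: with the shoulder flexed 90° relative to the trunk and the elbow extended, fingertip position follows from simple trigonometry on each participant's measured segment lengths. The sketch below uses hypothetical anthropometrics, and the coordinate convention of the authors' Unity implementation is an assumption:

```python
import math

def target_position(hip_height, trunk_length, arm_length, hip_flexion_deg):
    """Return (forward_distance, height) of the fingertip when the trunk is
    pitched forward by hip_flexion_deg from vertical, with the shoulder
    flexed 90 deg relative to the trunk and the elbow fully extended."""
    th = math.radians(hip_flexion_deg)
    # Shoulder location after rotating the trunk forward about the hips.
    shoulder_x = trunk_length * math.sin(th)
    shoulder_y = hip_height + trunk_length * math.cos(th)
    # The arm stays perpendicular to the trunk (shoulder flexed 90 deg),
    # so it points forward and downward by the same angle.
    hand_x = shoulder_x + arm_length * math.cos(th)
    hand_y = shoulder_y - arm_length * math.sin(th)
    return hand_x, hand_y

# Hypothetical segment lengths in metres (hip height, trunk, arm).
high = target_position(0.95, 0.50, 0.70, 15)   # "high" target, 15 deg
low = target_position(0.95, 0.50, 0.70, 60)    # "low" target, 60 deg
```

With upright posture (0° flexion) the fingertip sits one arm length forward at shoulder height, and the 60° target lies well below the 15° target, matching the paper's high/low naming.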
Participants were told to use only their right hand when reaching for a green cube. Therefore, data analysis was performed only on the right-hand controller, and EMG was placed on the right deltoid and not the left. Participants were not given instructions on how to move, to avoid biasing natural movements; they were instructed to move "in any manner that allowed for reaching the target as long as their feet remained stationary".

Experimental protocol

EMG electrodes were first attached to the participant. Placement sites were determined through palpation and then cleaned with alcohol wipes. EMGs were taped in place to prevent shifting. Next, participants were asked to stand facing an empty wall in the data collection space while the VR headset was placed on top of their head and over their eyes. Once the VR headset was secure, participants received instruction about the go/no-go task. One block consisted of 26 reaching trials and participants completed 4 blocks in total (i.e., 104 trials). The target was placed at a height that in principle could elicit 15° of lumbar flexion (high target) for two blocks and 60° of lumbar flexion (low target) for two blocks. Participants cycled through the four blocks beginning at either the high or the low target height, alternating thereafter (H–L–H–L or L–H–L–H). The starting target height was counterbalanced across participants. Additionally, the VR task was programmed to deliver half of the trials in each block as a go stimulus and half as a no-go stimulus. The order of go and no-go trials within a block was random, with 13 go trials and 13 no-go trials in each 26-trial block. A practice session of 10 trials (5 go, 5 no-go) at the high target and 10 trials at the low target was provided to each participant before data collection began.

Data analysis

A customized Matlab program was used to analyze the right hand controller position data.
First, the data were low-pass filtered at 20 Hz. Second, a "distance window" was set to locate the maximum distance the hand traveled. This window started 1 ms after the target changed color and ended 1 ms before the end of the trial. Third, a "baseline window" was set for locating the mean and standard deviation of the signal during the baseline, which were used to normalize the overall signal. This window started 90 ms after the blue target appeared and ended 20 ms later. Once the signal was normalized using the baseline mean, the maximum distance the hand traveled and the onset of hand movement were calculated. The maximum distance was located within the distance window. Onset was calculated as the point when the signal increased by 5% of the maximum distance traveled. Trials with anticipatory activity were not included in the analysis. A threshold was set to determine whether the hand moved during no-go trials. Based on the position and orientation accuracy of the VIVE trackers/controllers, our threshold for a commission error was 0.68 cm plus three times the standard deviation of 0.32 cm, i.e., 1.64 cm. If the maximum distance the hand traveled during a no-go trial passed this threshold, the trial was marked as an error of commission, meaning the hand moved in response to a no-go stimulus. A summary statistic was calculated by summing the number of errors of commission that occurred on no-go trials for each individual and for each target height. A customized Matlab program was used to analyze the EMG data. First, data were high-pass filtered at 2 Hz (Butterworth 4th order, dual pass), rectified, and low-pass filtered at 6 Hz (Butterworth 4th order, dual pass). These filter settings were used to ensure consistent identification of the onset of bursting EMG activity. Second, a window was set for locating peak amplitude and onset of EMG activity. The window started after the target changed color from blue to red or green and ended with the termination of the trial.
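The hand-controller pipeline described above (baseline normalization, maximum distance, the 5% onset rule, and the 0.68 cm + 3 SD commission threshold) can be sketched in a few lines of numpy. This is an illustrative transcription, not the authors' Matlab code; it assumes the displacement trace has already been low-pass filtered at 20 Hz, and the window boundaries and synthetic trace are hypothetical:

```python
import numpy as np

# Commission-error threshold: tracker accuracy (0.68 cm) + 3 SD (0.32 cm each).
ERROR_THRESH_CM = 0.68 + 3 * 0.32   # = 1.64 cm

def analyze_hand_trial(distance_cm, baseline_slice, move_slice):
    """Analyze one trial of (already filtered) hand displacement in cm.

    baseline_slice : samples used to estimate the baseline mean
    move_slice     : samples from just after the color change to trial end
    Returns (max_distance, onset_index_or_None, is_commission_error).
    """
    d = distance_cm - distance_cm[baseline_slice].mean()  # baseline-normalize
    window = d[move_slice]
    max_dist = window.max()
    # Onset: first sample reaching 5% of the maximum distance traveled.
    above = np.nonzero(window >= 0.05 * max_dist)[0]
    onset = move_slice.start + int(above[0]) if above.size else None
    return max_dist, onset, max_dist > ERROR_THRESH_CM

# Synthetic trial: the hand is still for 150 samples, then ramps out 10 cm.
trace = np.concatenate([np.zeros(150), np.linspace(0, 10, 150)])
max_d, onset, err = analyze_hand_trial(trace, slice(10, 30), slice(100, 300))
```

On a no-go trial, `err` is the behavioral error-of-commission flag; on a go trial, `onset` is the behavioral onset time in samples.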
This ensured that anticipatory muscle activity was not included in determining peak amplitude and onset. Third, onset and peak EMG amplitude were calculated for all go trials. Peak amplitude was calculated as the maximum value occurring between the target color change and the termination of the trial, normalized to the average EMG activity during the baseline window. The baseline window consisted of 1000 ms starting when the target turned blue. We calculated onset as the point when EMG activity increased by ≥ 5% of the normalized peak amplitude. Average peak amplitude was calculated for each muscle at both the high and low targets during the go trials. We used a threshold to quantify the extent of muscle activity produced during no-go trials. The threshold was calculated as the average baseline EMG activity plus 3 × the standard deviation. Each threshold was calculated individually for each participant and for each muscle. If the EMG activity during a no-go trial passed the threshold, the trial was labeled as an error of commission. Finally, all trials were plotted for visual inspection to ensure that our automated analysis approach was performed correctly. Figure shows three example EMG time series from three different trials. The black lines represent the filtered EMG signal. The blue, green, and red bars below the time series represent the color of the target across time. The grey shading behind the EMG time series demarcates the amplitude of the threshold. Figure a shows a go trial that was labelled as correct. Figure b shows a no-go trial that was also labelled as a correct response because the EMG signal did not rise above the threshold. Figure c shows EMG activity during a no-go trial that was identified as a commission error because the peak amplitude passed the set threshold. The EMG response in Fig.
a,c was therefore similar, but one was labelled as an error and one was labeled as a correct response based on the trial type (i.e., go vs no-go). The summary statistic was calculated by summing the number of errors of commission that occurred on no-go trials for each individual, for each muscle, and for each target height. Statistical analysis To determine whether errors of commission were greater and onset times were faster for neurophysiological as compared to behavioral measures we ran separate two-way, repeated-measure ANOVAs with height (high target, low target) and muscle/hand position (5 EMGs, 1 hand controller) as independent variables, with sex as a covariate. Significant main effects and interactions were followed up with separate, repeated measure ANOVAs at the high and low target and t-tests as appropriate. All follow-up tests were corrected for multiple comparisons using FDR correction . To explore the relationship between errors of commission during no-go trials and onset times during go trials we found the average (across target heights) of each individual’s error of commission and onset. This data was used to produce a Pearson Product Correlation at each muscle/hand position as well as a Person Product Correlation across all the measures. All analyses were performed in IBM SPSS Statistics for Window, version 25 (IBM Corp., Armonk, N.Y., USA). The University of Florida Institutional Review Board approved this study and research adhered to relevant guidelines and regulations. Informed consent was collected from all subjects prior to data collection. All data collection occurred at the University of Florida, and we recruited 51 participants for this study. Exclusion criteria for all participants included self-reported history of Parkinson’s disease, Alzheimer’s disease, multiple sclerosis, amyotrophic lateral sclerosis, TIA, stroke, seizures, epilepsy, cancer, heart disease, pregnant. 
The mean age was 19.5 years (± 0.72), 14 were male and 38 were female. A Delsys Trigno Wireless system (Delsys Inc., Boston MA, USA) was used to collect surface EMG from seven muscles. One electrode was placed over the right deltoid (Delt) to quantify movement of the right arm during reaching. We placed EMGs over the right and left tibialis anterior (RTA, LTA) given their role in anticipatory postural adjustments during forward displacement of the torso . To represent the erector spinae group , we placed electrodes over the right and left multifidus (RMulti and LMulti). We did not collect data from the biceps and triceps brachii as we were not anticipating strong involvement of elbow flexion and extension due to participants starting each trial standing with upright posture, and their arm relaxed by their side with their elbow fully extended. EMG sampling rate was 1000 Hz. An HTC VIVE virtual reality system (HTC Corp., New Taipei City, Taiwan) was used to project the game space. The game was programed using the Unity Virtual Reality platform (Unity Technologies, Copenhagen, Denmark). Hardware included the VIVE headset, two wireless hand controllers to track position of the hand, and two base stations for continuous localization of the headset, controllers, and trackers in space. The virtual reality game space was projected to the participant’s headset as well as a 30″ computer monitor (Dell UltraSharp U3011, Dell Co, Round Rock, TX) to ensure that the researcher could see what the participant was seeing within the VR environment. A Motion Monitor (Innovative Sports Training, Inc., Chicago, IL) system and an Arduino microcontroller (Arduino LLC., Italy) was used to synchronize data from the VR equipment and the EMG electrodes. Accuracy of position and orientation of the VIVE trackers/controllers is within 0.68 ± 0.32 cm translationally, and 1.64 ± 0.18° rotationally . Sampling rate of VIVE controllers was 103 Hz. 
Target locations were calculated from anthropometric measurements taken from each participant using one VR hand controller that marked the following locations on the body: the ground directly between the participant’s medial malleoli, top of the head, C-7, L-5, right greater trochanter, right acromioclavicular joint, right elbow, and tip of the right middle finger. From these coordinate data, target location in the VR space was normalized to each participant’s anthropometric measurements (i.e., hip height, trunk length, and arm length; see for more details). Thus, the virtual targets were located such that the participants could reach the target, in theory, by flexing their hips 15 degrees (high target) and 60 degrees (low target), with the shoulder flexed 90 degrees and the elbow fully extended. The coordinate data were also used to scale the participant’s avatar in VR space to match real-world coordinates to ensure fidelity of visual feedback in the VR space. Participants started each trial with upright posture and each hand holding a VR controller relaxed at their sides. Figure shows the timeline, visual display, and cartoon of body position during a single virtual reality reaching trial. Each trial began with a rest period. Next, participants were told that a blue cube would appear in front of them and that this cube represents the target they need to reach forward and touch. After 2000 ms, the same cube turned green (go trial) or red (no-go trial). The participant was told to stand still if the cube turned red. If the cube turned green, the participant reached out with their right hand and touched the cube. The cube remained green or red for 750 ms and participants were instructed to reach and touch the green cube before it disappeared. Once the target disappeared participants had 8.5 s to return to the starting position and get ready for the next trial. Participants were told to only use their right hand when reaching for a green cube. 
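The target-placement rule described above (a reach that nominally requires 15° or 60° of hip flexion with the shoulder flexed 90° and the elbow fully extended) reduces to simple planar trigonometry. The sketch below is illustrative only; it is not the authors' Matlab/Unity code, and the segment lengths used in the example are hypothetical:

```python
import math

def target_position(hip_height, trunk_len, arm_len, hip_flexion_deg):
    """Forward distance and height of a reach target that would require the
    given hip-flexion angle, with the shoulder flexed 90 deg and the elbow
    fully extended. Illustrative geometry, not the study's implementation."""
    th = math.radians(hip_flexion_deg)
    # The trunk tilts forward by `th` from vertical; the arm is held
    # perpendicular to the trunk, pointing forward and down.
    forward = trunk_len * math.sin(th) + arm_len * math.cos(th)
    height = hip_height + trunk_len * math.cos(th) - arm_len * math.sin(th)
    return forward, height
```

For example, with a hypothetical 1.0 m hip height, 0.5 m trunk, and 0.7 m arm, the 15° (high) target sits well above the 60° (low) target, matching the high/low labels used in the task.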
Therefore, data analysis was only performed on the right-hand controller and EMG was placed on the right deltoid and not the left. Participants were not given instructions on how to move to avoid biasing natural movements. Participants were instructed to move “in any manner that allowed for reaching the target as long as their feet remained stationary”. EMG electrodes were first attached to the participant. Placement sites were determined through palpation and then cleaned with alcohol wipes. EMGs were taped in place to prevent shifting. Next, participants were asked to stand facing an empty wall in the data collection space while the VR headset was placed on top of their head and over their eyes. Once the VR headset was secure, participants received instruction about the go/no-go task. One block consisted of 26 reaching trials and participants completed 4 blocks (i.e., 104 trials in total). The target was placed at a height that in principle could elicit 15° of lumbar flexion (high target) for two blocks and 60° of lumbar flexion (low target) for two blocks. Participants cycled through the four blocks beginning at either the high or the low target height, followed by the opposite target height (H–L–H–L or L–H–L–H). The starting target height was counterbalanced across participants. Additionally, the VR task was programmed to deliver half of the trials in each block as a go stimulus and half of the trials in each block as a no-go stimulus. The order of go and no-go trials within a block was random. A total of 13 go trials and 13 no-go trials were provided in each 26-trial block. A practice session of 10 trials (5 go, 5 no-go) at the high target and 10 trials at the low target were provided to each participant before data collection began. A customized Matlab program was used to analyze position data from the right-hand controller. First, data was low pass filtered at 20 Hz. Second, a “distance window” was set to locate the max distance the hand traveled. 
This window started 1 ms after the target changed color and ended 1 ms before the end of the trial. Third, a “baseline window” was set for locating the mean and standard deviation of the signal during the baseline, which was used to normalize the overall signal. The window started 90 ms after the blue target appeared and ended 20 ms later. Once the signal was normalized using the baseline mean, the max distance the hand traveled and the onset of the movement of the hand were calculated. The max distance was located within the distance window. Onset was calculated as the point when the signal increased by 5% of the max distance traveled. Trials with anticipatory activity were not included in the analysis. A threshold was set to determine if the hand moved during no-go trials. Based on position and orientation accuracy of VIVE trackers/controllers, our threshold for a commission error was 0.68 cm + three times the standard deviation of 0.32 cm. If the max distance the hand traveled during no-go trials passed this threshold the trial was marked as an error of commission, meaning the hand moved in response to a no-go stimulus. A summary statistic was calculated by summing the number of errors of commission that occurred on no-go trials for each individual and for each target height. A customized Matlab program was used to analyze EMG data. First, data was high-pass filtered at 2 Hz (Butterworth 4th order dual pass), rectified, and low-pass filtered at 6 Hz (Butterworth 4th order dual pass). These filter settings were used to ensure consistent identification of onset of bursting of EMG activity , . Second, a window was set for locating peak amplitude and onset of EMG activity. The window started after the target changed color from blue to red or green and ended with termination of the trial. This ensured that anticipatory muscle activity was not included in determining peak amplitude and onset. Third, onset and peak EMG amplitude were calculated for all go trials. 
Peak amplitude was calculated as the maximum value that occurred between target color change and termination of the trial. The maximum value was normalized to the average EMG activity during the baseline window. The baseline window consisted of 1000 ms starting when the target turned blue. We calculated onset as the point when EMG activity increased by ≥ 5% of the normalized peak amplitude. Average peak amplitude was calculated for each muscle at both the high and low target during the go trials. We used a threshold to quantify the extent of muscle activity produced during no-go trials. The threshold was calculated as the average baseline EMG activity plus 3 × the standard deviation. Each threshold was calculated individually for each participant and for each muscle. If the EMG activity during a no-go trial passed the threshold, the trial was labeled as an error of commission. Finally, all trials were plotted for visual inspection to ensure that our automated analysis approach was correctly performed. Figure shows three example EMG time series from three different trials. The black lines represent the filtered EMG signal. The blue, green, and red bars below the time series represent the color of the target across time. The grey shading behind the EMG time series demarcates the amplitude of the threshold. Figure a shows a go trial that was labeled as correct. Figure b shows a no-go trial that was also labeled as a correct response because the EMG signal did not rise above the threshold. Figure c shows EMG activity during a no-go trial which was identified as a commission error because the peak amplitude passed the set threshold. The EMG response in Fig. a,c was therefore similar, but one was labeled as an error and one was labeled as a correct response based on the trial type (i.e., go vs no-go). 
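The per-muscle threshold and onset rules described above reduce to a few lines of arithmetic. The following is a minimal Python sketch of that logic (the published analysis used a customized Matlab program; the Butterworth filtering step is omitted here, so the input is assumed to be an already-filtered, rectified EMG trace):

```python
def commission_error(trial_emg, baseline_emg):
    """Label a no-go trial an error of commission when EMG exceeds the
    per-participant, per-muscle threshold: baseline mean + 3 x SD."""
    n = len(baseline_emg)
    mean = sum(baseline_emg) / n
    sd = (sum((x - mean) ** 2 for x in baseline_emg) / n) ** 0.5
    return max(trial_emg) > mean + 3 * sd

def emg_onset_index(trial_emg, baseline_mean, frac=0.05):
    """Return the first sample where baseline-normalized EMG reaches
    >= `frac` (here 5%) of the normalized peak amplitude (go trials)."""
    peak = max(trial_emg) / baseline_mean  # normalized peak amplitude
    for i, x in enumerate(trial_emg):
        if x / baseline_mean >= frac * peak:
            return i
    return None
```

A trace that stays within baseline noise is labeled correct on a no-go trial, while the same burst of activity would be labeled correct on a go trial, mirroring the figure description above.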
The summary statistic was calculated by summing the number of errors of commission that occurred on no-go trials for each individual, for each muscle, and for each target height. To determine whether errors of commission were greater and onset times were faster for neurophysiological as compared to behavioral measures we ran separate two-way, repeated-measure ANOVAs with height (high target, low target) and muscle/hand position (5 EMGs, 1 hand controller) as independent variables, with sex as a covariate. Significant main effects and interactions were followed up with separate, repeated measure ANOVAs at the high and low target and t-tests as appropriate. All follow-up tests were corrected for multiple comparisons using FDR correction . To explore the relationship between errors of commission during no-go trials and onset times during go trials we found the average (across target heights) of each individual’s error of commission and onset. This data was used to produce a Pearson Product Correlation at each muscle/hand position as well as a Pearson Product Correlation across all the measures. All analyses were performed in IBM SPSS Statistics for Windows, version 25 (IBM Corp., Armonk, N.Y., USA).
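The text does not state which FDR procedure was applied; the most common choice is the Benjamini–Hochberg step-up procedure. A minimal sketch, under the assumption that this is the variant meant:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Benjamini-Hochberg FDR control: return a parallel list of booleans
    marking which p-values are significant at false-discovery rate q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank k with p_(k) <= (k / m) * q ...
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    # ... then declare every test with rank <= k significant.
    sig = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            sig[i] = True
    return sig
```

For production use the same correction is available off the shelf, e.g. statsmodels' `multipletests(..., method='fdr_bh')`.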
Medicinal Chemistry Strategies for the Modification of Bioactive Natural Products | 26ba8f88-a70c-407f-9afb-6b68055cf6f8 | 10856770 | Pharmacology[mh] | Natural bioactive compounds are structurally unique metabolites produced by a variety of organisms, including animals, plants, and microorganisms, that possess exceptional physiological activities. These compounds are valuable resources for the development of novel drugs, particularly in the ongoing battle against infectious diseases and cancer . However, most natural bioactive compounds exhibit certain limitations in terms of their biological properties. These limitations include low activity, limited specificity, significant toxicity, and unfavorable pharmacokinetic profiles, which hinder their direct utilization as pharmaceutical agents. Therefore, it is imperative to optimize and modify their molecular structures to enhance their biological properties and achieve safety, efficacy, and control in drug applications. Natural active compounds are highly regarded as valuable resources for discovering structurally innovative lead compounds. By modifying and optimizing the molecular structure, diverse libraries of compounds can be generated, harnessing the potential of natural product resources and bolstering the prospects of new drug development. The field of drug screening and development based on natural products has significantly advanced with the advent of bioinformatics technologies, including artificial intelligence (AI) and advanced computing. Techniques such as target prediction, metabolite profiling of natural products, and investigations into the dynamics and thermodynamics of pharmacophores have greatly facilitated the identification of lead compounds from natural sources. These approaches enable the exploration of structural transformations, commencing from the intact natural product, progressing to its fragments, and ultimately culminating in structural optimization . 
Compared to chemically synthesized drugs, natural products are characterized by structural diversity and complexity, more chiral centers, fewer nitrogen and halogen atoms or aromatic rings, and other characteristics that will be discussed below. 2.1. Diversity and Complexity of Natural Product Structures Natural products exhibit a remarkable diversity and complexity in their structures. Take artemisinin ( 1 , ), for example, which contains a unique combination of a peroxide bond, lactone, and a bridged tricyclic system. These structural features not only preserve chemical reactivity but also ensure molecular stability . Similarly, paclitaxel ( 2 , ) possesses a fused tetracyclic framework with a 6-8-6-4 arrangement of functional groups, which contributes to its potent inhibition of microtubule proteins. However, the complex structures of natural products pose challenges in their chemical synthesis . In some cases, natural product structures may contain “redundant” atoms that do not participate in target binding. This presence of redundant atoms can have negative implications on the physicochemical, pharmacokinetic, and pharmaceutical properties of the compounds. Therefore, it becomes crucial to remove these redundant atoms and fragments during the process of structural modification in order to enhance the efficiency of ligand binding . 2.2. High sp3 Carbon Content and Few Aromatic Rings Natural product structures often exhibit a high proportion of sp3-hybridized carbon atoms, which are commonly found in aliphatic chains or cyclic compounds. This unique feature imparts flexibility to these structures. For example, the immunosuppressant tacrolimus ( 3 , ) and the antitumor agent epothilone B ( 4 , ) possess large macrocyclic lactone structures that provide them with considerable flexibility. The active component ISP-1 ( 5 , ) in Cordyceps militaris , an immunomodulator, is a flexible linear compound. 
Nature seems to prefer aliphatic rings over aromatic rings, as only 38% of known natural products contain aromatic systems . Aromatic rings play a crucial role in interacting with drug targets through phenomena such as π-π stacking, hydrogen bonding, and van der Waals forces, thereby influencing pharmacological properties and bioactivity. Additionally, the introduction of various substituents or functional groups on aromatic rings can effectively modulate the properties, selectivity, and solubility of drug molecules . 2.3. Low Nitrogen and Halogen Content Most natural products primarily consist of carbon, hydrogen, and oxygen, with a relatively low abundance of nitrogen atoms. When nitrogen atoms are present, their quantity is often limited. Nitrogen atoms can display nucleophilic characteristics and can exist in trivalent or pentavalent states. They can appear as basic salts or neutral amides, participate in ring formation, contribute to aromatization and fusion reactions, act as terminal or linking groups, and function as both hydrogen bond donors and acceptors. These properties enhance the binding efficiency between small molecules and ligands and can also have an impact on the drug’s solubility and bioavailability . With the exception of bromine atoms commonly found in marine organisms, natural products generally contain a low amount of halogens . Approximately 20% of small molecule anticancer lead compounds incorporate iodine, bromine, or chlorine. Halogens provide enhancements in lipophilicity and membrane permeability, and the electronegativity of halogens can augment the biological activity of the central molecule. For example, the inclusion of potent electron-withdrawing groups like fluorine can improve binding affinity, metabolic stability, physical properties, and selective activity . 2.4. 
Chirality and Stereochemistry The generation of natural products involves a series of enzymatic reactions, wherein the stereospecificity of these reactions determines the stereochemical attributes of the resultant products, including chiral centers, axes, and cis-trans isomerism . For instance, morphine ( 6 , ) consists of 21 non-hydrogen atoms, forms five fused rings, and possesses five chiral centers (red dots). Lovastatin ( 7 , ), on the other hand, comprises 28 non-hydrogen atoms, has eight chiral centers (red dots), and two conjugated trans double bonds. Dealing with chirality and stereochemistry can be as challenging as handling complex structures. Consequently, during chemical synthesis, it is advisable to minimize unnecessary chiral elements while maintaining activity and pharmacokinetic properties . 
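The heavy-atom counts quoted above (e.g., morphine's 21 non-hydrogen atoms) follow directly from molecular formulas. A small illustrative parser; the formulas used in the example (C17H19NO3 for morphine, C15H22O5 for artemisinin) are standard literature values, not taken from this review:

```python
import re

def atom_counts(formula):
    """Parse a molecular formula such as 'C17H19NO3' into element counts."""
    counts = {}
    for elem, num in re.findall(r"([A-Z][a-z]?)(\d*)", formula):
        counts[elem] = counts.get(elem, 0) + (int(num) if num else 1)
    return counts

def heavy_atoms(formula):
    """Count non-hydrogen atoms, as quoted for morphine (21) above."""
    return sum(n for e, n in atom_counts(formula).items() if e != "H")
```

Running the same count over nitrogen- or halogen-rich synthetic drugs quickly exposes the compositional differences between natural products and chemically synthesized drugs discussed in this section; cheminformatics toolkits such as RDKit compute these and many richer descriptors directly from structures.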
The aforementioned characteristics present a range of options for structural modification and transformation, allowing for novel discoveries in research and development. Compared with complex natural drugs, the structure of chemical drug molecules is relatively simple, with a higher content of aromatic rings, especially many nitrogen-containing aromatic heterocycles, resulting in a higher overall nitrogen content in the molecule. This design can form additional hydrogen bond donors and acceptors. The linking mode of cyclic systems is relatively simple, and it is often linked by short linkers such as amide bonds or methylene. Some structural fragments frequently appear in multiple drug molecules and have specific structures and functions, which are closely related to the activity and characteristics of drugs . 
Privileged fragments refer to small molecular fragments or scaffolds that are highly represented in bioactive compounds and exhibit a wide range of biological activities. In medicinal chemistry, they have several advantages : (1) Activity and affinity: The utilization of privileged fragments in multiple drugs is primarily driven by their demonstrable higher activity and affinity, facilitating specific interactions with biological macromolecules, such as proteins and enzymes. These fragments have exhibited favorable biological activities in several drug compounds, thereby enhancing the probability of uncovering pharmaceutically active compounds; (2) Highly optimized properties: given the recurrent presence of privileged fragments in multiple pharmaceuticals, their structural and functional characteristics have undergone rigorous validation and verification within an extensive repertoire of drug design and optimization endeavors. Consequently, these fragments have garnered considerable attention and refinement, offering distinct advantages in enhancing pharmacokinetic properties, pharmacological profiles, and selectivity of drug candidates ; (3) Flexibility in structural modification: privileged fragments assume a pivotal role as core scaffolds in medicinal agents, providing ample opportunities for subsequent structural modifications to fine-tune their specific characteristics. This inherent flexibility enables chemists to personalize these fragments by manipulating side chains and incorporating additional functional groups, fostering improved optimal drug performances ; (4) Rich structural diversity: although privileged fragments possess predetermined skeletal frameworks, their adeptness for modifications at diverse positional and orientational geometries engenders a wealth of structural diversity . 
In doing so, privileged fragments facilitate the integration of molecular diversity, effectively expanding the scope of medicinal chemistry investigations to encompass numerous potential targets and pathways. In conclusion, privileged fragments exhibit high activity and affinity, and, through optimization and research, they have demonstrated significant advantages in medicinal chemistry. Due to their controllable structure, privileged fragments offer relative simplicity in synthesis compared to complex natural compounds, allowing for further optimization through structural modifications. This flexibility enables chemists to personalize these fragments by adjusting side chains, introducing additional functional groups, and establishing abundant structure-activity relationships, ultimately resulting in rich structural diversity . This facilitates the attainment of improved drug performance . In medicinal chemistry, “simplifying complexity” is an important remodeling strategy. It involves simplifying the structural core of complex active natural products, partially or completely transforming them into privileged scaffold structures that are easier to synthesize and possess stronger pharmacological effects. Additionally, it can involve deconstructing active natural products into smaller molecular fragments, reassembling and optimizing the entire new scaffold using fragment-based drug design principles, and further modifying it through local structural modifications to enhance its activity and pharmacological efficacy. Over the past 20 years, with the widespread application of high-throughput screening techniques and computer-aided drug design, as well as the increasing availability of protein structure databases and natural product databases, a foundation has been established for structure-based remodeling of active natural products. 
By integrating techniques such as virtual screening, high-throughput screening, large databases, structural biology information, and computational chemistry, it is possible to employ structure-based drug design strategies such as scaffold hopping and privileged scaffold replacement to optimize certain active natural lead compounds. 4.1. From ISP-1 to Siponimod The ultimate goal of modifying natural products is to develop active compounds into medicines. The transformation process from ISP-1 to siponimod ( 11 , a) serves as an example of natural product modification. The strategies and methods used in this process have significant implications for modifying other natural compounds ( a). ISP-1 is a natural product that exhibits immunomodulatory effects by acting on the sphingosine-1-phosphate receptor . ISP-1 possesses a complex structure with three chiral centers, a trans double bond, and both an amino group and a carboxyl group. However, due to its high toxicity and low solubility, ISP-1 cannot be used as a medication without further modifications or adaptations . To simplify the structure, reduce or eliminate chiral centers, improve activity, and enhance pharmacokinetics, compound 8 was selected as a lead compound for structural modifications. Through a series of transformations and optimizations, fingolimod ( 9 , a) was eventually developed. Fingolimod is a symmetrical molecule that has undergone modifications such as removal of the ketone group, trans double bond, and chiral carbons. This molecule lacks chiral and stereoisomeric factors and incorporates a benzene ring into the long chain, which reduces the number of saturated carbons and facilitates synthesis and conformational rigidity. Fingolimod was introduced to the market in 2010 for the treatment of multiple sclerosis . The success of fingolimod can be attributed to the substitution of the alkyl chain with an aromatic ring. 
Alkyl chains, being flexible in nature, can exist in various conformations, which may not favor the attainment of a “high concentration” of active conformations. Hence, incorporating factors that restrict conformational flexibility in the chain, such as replacing a portion of the saturated carbon chain with a phenyl ring, can offer advantages in terms of potency, pharmacokinetics, safety, and physicochemical properties . Fingolimod functions as a prodrug that is transformed into its active form by sphingosine kinase 2 in the liver after oral absorption. In light of this, a new compound called phosphorylated fingolimod ( 10 , a) was designed . In the subsequent stages of development, the researchers discovered Siponimod , a novel S1P1 receptor agonist that was developed by Novartis. This compound was designed by replacing the flexible lipid chain of fingolimod with a rigid aromatic ring and cyclohexane, and by introducing a trifluoromethyl group onto the aromatic ring. These structural modifications were implemented to enhance the selectivity of siponimod in its interaction with specific receptors, thereby potentially influencing its pharmacological activity . In initial structure-activity studies, the trifluoromethyl group on the benzene ring was found to significantly impact activity, boosting it by over 30 times compared to the unsubstituted hydrogen atom. Despite the structural differences between siponimod and fingolimod , they share similar molecular sizes and pharmacophore features, and their distribution patterns exhibit resemblances ( b). In terms of potency, siponimod exhibits an EC 50 value of 0.4 nM, while its EC 50 for the S1P3 receptor, where its activity is undesirable, is at 5 μM, thus demonstrating a high level of selectivity. Furthermore, studies conducted on monkeys have revealed an oral bioavailability of 71% for siponimod , with a plasma half-life (T 1/2 ) of 19 h . 
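The potency and selectivity figures quoted above are easy to put on a common scale. A short calculation: the EC50 values (0.4 nM at S1P1, 5 μM at S1P3) come from the text, while the pEC50 log-transform is a standard pharmacology convention, not something reported in the article:

```python
import math

def pec50(ec50_molar):
    """pEC50 = -log10(EC50 expressed in mol/L)."""
    return -math.log10(ec50_molar)

# EC50 values quoted in the text for siponimod.
ec50_s1p1 = 0.4e-9   # 0.4 nM at the intended target, S1P1
ec50_s1p3 = 5.0e-6   # 5 uM at the undesired off-target, S1P3

# Ratio of off-target to on-target EC50: a 12,500-fold preference for S1P1.
fold_selectivity = ec50_s1p3 / ec50_s1p1
```

On the log scale this is a gap of roughly 4.1 pEC50 units (about 9.4 versus 5.3), which is what "a high level of selectivity" amounts to numerically.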
From ISP-1 to siponimod , the transformation process involves simplifying the complex structure of a natural product with multiple chiral centers and a long flexible chain, which has poor pharmacological properties, into a small molecule chemical drug with a clearly defined drug-like structure. Several transformative strategies can be summarized as follows: (1) In the initial stage of remodeling, the structure is simplified by removing chiral centers as much as possible to reduce the difficulties in chemical synthesis. This allows the synthesis of a controllable drug scaffold for systematic structure-activity relationship studies; (2) For the long flexible linear alkyl chain, the advantage of introducing a pharmaceutically strong aromatic core is that it increases molecular rigidity, and the aromatic ring structure enables further structural modifications; (3) The flexible tail of fingolimod is further replaced with a benzene ring and a cyclohexane while maintaining its hydrophobic properties. This not only enhances rigidity but also allows for additional modifications by introducing functional groups onto the benzene ring; (4) The optimization can be rationalized using molecular modeling based upon the solved S1P1 crystal structure (PDB: 3V2Y). Based on the predicted binding mode, it appears that the carboxylic acid headgroup of siponimod is able to form salt bridges with Lys34 and Arg120 and a hydrogen bond with Tyr29. These electrostatic interactions are strong and serve to anchor the ligand molecule in its binding pocket . Although there is a significant difference in structure from ISP-1 to siponimod , the remodeling process follows the principles of simplification and advantageous fragment replacement, leading to the generation of simplified lead compounds. Through structure-activity relationship studies, guided by structure-based drug design principles, and with the assistance of structural biology, the structure is further elaborated. 
These strategies demonstrate an effective approach for the transformation of active natural products by combining scaffold hopping, privileged fragment replacement, and guidance from structural biology information. 4.2. The “Statin” Drugs Mevastatin ( 12 , ), which is isolated from the fermentation broth of the fungus Penicillium citrinum , was the first inhibitor discovered that targets hydroxymethylglutaryl-coenzyme A (HMG-CoA) reductase, the rate-limiting enzyme in cholesterol synthesis in the body. The chemical structure of lovastatin ( 13 , ) includes eight chiral carbons, with two within the upper lactone ring and six on the lower hexahydronaphthalene ring. It was introduced to the market in 1987 as a medication for lowering cholesterol levels . The lactone ring, formed by a dihydroxy acid, is an important pharmacophore feature, while the hexahydronaphthalene structure serves as a backbone and hydrophobic fragment crucial for enzyme binding, although the presence of chiral centers is not necessary. Subsequent “statin” drugs that entered the market retained the dihydroxy acid structure but underwent significant changes in the lower part of the molecule, which lacks chiral centers. Enzyme binding is primarily governed by hydrophobic-hydrophobic interactions. For instance, fluvastatin ( 15 , ), pitavastatin ( 16 , ), atorvastatin ( 17 , ), and rosuvastatin ( 18 , ) all share the same spatial orientation of the two hydroxyl groups in the dihydroxy acid fragment, but their structural backbones are transformed into indole, quinoline, pyrrole, and pyridine rings, respectively. These structural variations contribute to their lipid-lowering effects . Interestingly, despite the different structural types of synthetic statin drugs, they share similar binding modes. a illustrates the interactions of simvastatin and rosuvastatin with the amino acid residues. 
Compared to simvastatin, rosuvastatin utilizes aromatic rings with halogen and nitrogen atoms to replace the hexahydronaphthalene ring . The additional fluorophenyl motif and nitrogen atoms enhance the binding affinity between rosuvastatin and the target protein ( b) . The spatial arrangement and structure of compounds is critical for their biological activity. However, the construction of complex stereogenic structures in synthetic medicinal molecules differs fundamentally from the formation of stereoisomers in natural products. Stereogenic structures in natural products are formed via complex biosynthetic pathways, relying on several endogenous reactions and selective enzymes. In contrast, medicinal chemists can construct and control the stereogenic structure of synthetic molecules via chemical synthesis and purification techniques. Therefore, the methods for constructing stereogenic structures of synthetic medicinal molecules are fundamentally different from the way in which natural products form stereoisomers. The stereogenic structure of natural medicinal molecules is often constructed with unique stereochemistry through chiral sp3-hybridized carbon atoms or complex ring systems that happen to achieve binding specificity with proteins, generating biological activity. For example, the hexahydronaphthalene structure of type 1 “statin” drugs is formed by five chiral carbon atoms with sp3 hybridization, creating a unique stereoconfiguration of fused rings. It is difficult to synthesize analogs of this structure or make modifications or substitutions on this structure. It is also challenging to conduct comprehensive structure-activity relationship studies on this ring system. In type 2 molecules, the transformation of the hexahydronaphthalene structure into privileged structures such as quinolines and indoles proceeds by substituting the ester group with a benzene ring and incorporating a fluorine atom onto the benzene ring. 
Additionally, the methyl group can be substituted with a propyl or cyclopropyl group as desired. In atorvastatin, the spatial configuration of the four aromatic ring systems is achieved by connecting different aromatic rings and adjusting the dihedral angles between them, forming a unique stereostructure. Fragments are connected either directly between aromatic rings or via amide bonds as linkers, a common connection method in the synthesis of drug molecules. Many classic organic name reactions, such as the Suzuki, Buchwald–Hartwig, and Ullmann reactions, efficiently enable coupling between aromatic rings. Compared to the ester bond commonly found in natural products, the amide bond is more stable and inert: it tolerates changes in temperature and pH, is less susceptible to acid-base hydrolysis, and therefore exhibits good stability in biological systems. The stereoelectronic and donor-acceptor interactions of the amide bond can modulate the pharmacokinetic properties of the drug. Incorporating amide bonds into drug molecules can tune the balance between lipophilicity and hydrophilicity, allowing better control of their absorption, distribution, and metabolic behavior. In the structure of rosuvastatin, more nitrogen atoms are introduced. Nitrogen atoms can typically act as hydrogen bond acceptors or donors. Owing to their electronegativity, nitrogen atoms readily form hydrogen bonds with positively polarized hydrogen atoms. Hydrogen bonding is one of the key ways in which drug molecules interact with biomacromolecules such as receptors or enzymes. It not only influences the activity, specificity, and affinity of the drug, but also alters its physicochemical and metabolic properties.
Compared to atorvastatin, rosuvastatin displays enhanced hydrophilicity, resulting in reduced penetration of the blood-brain barrier. This characteristic avoids central nervous system stimulation and does not interfere with a patient’s sleep. In summary, privileged-fragment concatenation is a powerful technique for constructing active drug skeletons and accelerating the synthesis and optimization of drug molecules. It allows natural scaffolds to be replaced with controllable synthetic structures. These advantageous structures often consist of standardized chemical building blocks, enabling efficient creation of active compound libraries with similar structural features. The coupling of aromatic fragments is commonly used in chemical synthesis because it offers a fast and highly selective method for constructing versatile scaffolds in high yields. Aromatic pharmacophores are particularly suitable for substitution and fragment growth, especially when combined with small molecule-protein crystallography information. This combination enables precise adjustments in fragment growth and substitution, making it easier to explore structure-activity relationships. Such substitutions and modifications have the potential to enhance therapeutic efficacy and reduce adverse reactions. 4.3. From Phloridzin to Dapagliflozin For over 150 years, phlorizin ( 19 , ), a phloroglucinol glucoside, has been known to be present in the roots, stems, and fruit peels of fruit trees. Extensive research has explored its potential as a medicinal agent and pharmacological tool. Phlorizin exerts a hypoglycemic effect by inhibiting the sodium-glucose co-transporter 2 (SGLT2) in the renal tubules, leading to the excretion of glucose in the urine and reductions in blood glucose levels.
However, phlorizin also inhibits the sodium-glucose co-transporter 1 (SGLT1) in the intestinal mucosa, which limits its use as a drug and may cause side effects. Nevertheless, phlorizin serves as a valuable lead compound for further research. Structural modifications have been made to achieve the following objectives: (1) elimination of the inhibitory effect on SGLT1 while improving selective inhibition of SGLT2; (2) reduction or removal of phenolic hydroxyl groups to decrease phase II metabolism and prolong in vivo retention time; (3) enhancement of the in vivo stability of the glycosidic bond. Between the two aromatic rings of the dihydrochalcone core there are four rotatable bonds. Reducing the number of rotatable single bonds can help maintain the active conformation and enhance activity. Through a series of explorations, it was discovered that the benzene rings could be connected by a methylene group. Compared to phlorizin, sergliflozin ( 20 , ) has fewer freely rotating atoms and a more rigid molecular framework, thereby enhancing the selectivity of the scaffold. However, sergliflozin still has stability problems and is not available as a drug. The glycosyl group is a pivotal pharmacophore; however, the O-glycosidic linkage exhibits poor metabolic stability, as it is susceptible to cleavage by β-glucosidase enzymes. Further exploration disclosed compound 21 , characterized by a C-aryl glucoside and a meta-substituted diarylmethane. This transformation sustained activity and selectivity while bolstering metabolic stability. Starting from compound 21 , a series of structure-activity relationship studies were conducted. Among the C-aryl glucoside compounds, dapagliflozin ( 22 , ) demonstrated exceptional stability and selectivity.
It presented IC 50 values for SGLT2 and SGLT1 of 1.1 and 1390 nmol·L −1 , respectively, a selectivity for SGLT2 of more than a thousand-fold. Jointly developed by BMS and AstraZeneca, it advanced through Phase III clinical trials and was approved in the European Union in 2012 as the first SGLT2-targeting drug for type 2 diabetes. At the same time, a series of SGLT2 inhibitors developed independently by different companies were launched, such as canagliflozin ( 23 , ), empagliflozin ( 24 , ), and ipragliflozin ( 25 , ). These molecules share nearly the same pharmacophore features and similar scaffolds. Tofogliflozin ( 26 , ) is a non-glucoside SGLT2 inhibitor that possesses high selectivity and good absorption characteristics. During the early stages of development, owing to the lack of a crystal complex between phlorizin and the protein, the specific binding mode between the target protein and the compound was not well understood. Researchers primarily relied on traditional medicinal chemistry optimization methods such as scaffold hopping and structure-activity relationship studies to guide compound modifications. Although the detailed interactions between phlorizin and the target protein could not be directly elucidated, researchers were still able to achieve excellent therapeutic effects through compound improvements. Since 2022, however, several reports have emerged on crystal complexes between the SGLT2 protein and small molecules, revealing the binding mode of these compounds with the protein. The latest small molecule-protein co-crystallization analysis shows that, despite the structural similarities between the natural compound phlorizin and the marketed drugs, they are not located in the same active pocket ( a).
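The selectivity figure quoted above for dapagliflozin follows directly from the two IC 50 values. As a quick sanity check (a minimal sketch: fold selectivity is taken here simply as the ratio of off-target to on-target IC 50 , and pIC 50 as the usual negative log of the molar IC 50 ):

```python
import math

# Dapagliflozin IC50 values reported in the text (nmol/L).
ic50_sglt2_nM = 1.1     # on-target: SGLT2
ic50_sglt1_nM = 1390.0  # off-target: SGLT1

# Fold selectivity for SGLT2 over SGLT1: off-target IC50 / on-target IC50.
selectivity = ic50_sglt1_nM / ic50_sglt2_nM
print(round(selectivity))      # 1264 -> "more than a thousand-fold"

# pIC50 = -log10(IC50 in mol/L), a common potency scale.
pic50_sglt2 = -math.log10(ic50_sglt2_nM * 1e-9)
print(round(pic50_sglt2, 2))   # 8.96
```

The same two-line calculation applies to any of the on-/off-target pairs discussed in this section.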
This co-crystallization work provides a framework for understanding the mechanism of the SGLT2 inhibitors and lays a foundation for the future rational design and optimization of new inhibitors targeting these transporters. The finding opens new directions and opportunities for further research and enables scientists to explore non-glycoside SGLT2 inhibitors with different structural scaffolds. While glycoside-based SGLT2 inhibitors derived from root extracts have shown promising efficacy in medical practice, the development of non-glycoside SGLT2 inhibitors remains of significant scientific and commercial value. Compounds 29 , 30 , and 31 were identified as structurally novel non-glycoside SGLT2 inhibitors through a ligand-based virtual screening strategy combined with pharmacophore models and structural clustering analysis. Structure optimization and structure-activity relationship studies of compounds 29 , 30 , and 31 are currently underway, and more researchers are expected to contribute to the development of novel SGLT2 inhibitors in the future. In summary, these advanced technologies significantly enhance the speed and efficiency of drug development. Structural biology techniques such as cryo-electron microscopy can be used to determine the co-crystal structures of natural compounds and proteins, thereby identifying their binding sites, binding strength, binding modes, and effects on organisms. This contributes to our understanding of the mechanisms of action of active molecules in vivo and provides guidance for drug design and discovery. For compounds or targets that are difficult to co-crystallize, AlphaFold can be used to predict the binding modes of small molecules with proteins. While there may be some deviation between the predicted results and the actual binding state, such predictions still provide relevant information for each stage of modifying active natural products. 4.4.
Structure Simplification Structure simplification serves as a strategic approach to optimizing natural products. By reducing the complexity of natural product structures while retaining their activity, it makes them easier to synthesize and facilitates comprehensive exploration of structure-activity relationships in further studies. Take morphine, for example: none of the five chiral centers in morphine are essential for binding to opioid receptors. Although methadone ( 32 , ) and pethidine ( 33 , ) contain one chiral carbon, there is no difference in activity between their enantiomers. Fentanyl ( 34 , ) is a symmetrical molecule without chirality. Compared to the complex skeleton of morphine, fentanyl is simpler and easier to synthesize. The elimination of chiral centers and fixed stereochemical configurations is characteristic of this class of drugs with a simplified framework, and it has made systematic investigation of structure-activity relationships, solubility, absorption, and metabolism feasible. Similarly, cleaving the ring system of the alkaloid cocaine ( 35 , ) simplifies its structure and allows the synthesis of non-chiral local anesthetics such as procaine ( 36 , ), tetracaine ( 37 , ), and lidocaine ( 38 , ). Physostigmine ( 39 , ) is a parasympathomimetic alkaloid and a reversible cholinesterase inhibitor. Owing to its chemical instability in the body, modified compounds such as pyridostigmine bromide ( 40 , ) and neostigmine bromide ( 41 , ) have been developed. Unlike physostigmine, which is a tertiary amine, these synthetic quaternary ammonium salts have limited penetration into the central nervous system, resulting in a lower likelihood of adverse effects such as orthostatic hypotension, while still effectively improving muscle tone in patients with myasthenia gravis.
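A rough way to quantify the synthetic payoff of the simplifications above: each independent stereocenter doubles the number of stereoisomers a synthesis must, in principle, distinguish, giving an upper bound of 2^n (symmetry or ring fusion can lower the true count). A minimal sketch:

```python
def max_stereoisomers(n_stereocenters: int) -> int:
    """Upper bound on stereoisomer count: 2**n for n independent
    stereocenters (symmetry and fused rings can reduce the true count)."""
    return 2 ** n_stereocenters

# Morphine: five chiral centers -> up to 32 stereoisomers,
# only one of which is the natural, active configuration.
print(max_stereoisomers(5))  # 32

# Fentanyl: no chiral centers -> a single isomer to synthesize.
print(max_stereoisomers(0))  # 1
```

The same arithmetic explains why the eight chiral carbons of the type 1 statins made analog synthesis on that scaffold so laborious.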
The scaffolds derived from natural products can be dissected into simpler and more easily synthesized fragment-like scaffolds, either manually or computationally. These scaffolds inherit distinct conformational and physicochemical features from the original natural product templates, making them suitable for exploring chemically relevant space with biological activity.
The general design strategy follows a flow-process diagram, involving the gradual simplification of the structural complexity of the parent compound (natural product) into virtual fragments, resulting in the formation of small and chemically appealing scaffolds (Figure a). Natural products have provided inspiration, while computer programs have enhanced the efficiency of rational molecular design. Koch et al. designed a compound library from the marine product dibromo-dysidiolide, in which 19% of the compounds showed activity as inhibitors of 11β-hydroxysteroid dehydrogenase type I (Figure b) . Similarly, adopting the concept of extracting scaffolds, Wetzel et al. released the Scaffold Hunter software ( https://scaffoldhunter.sourceforge.net ) in 2009, which employs deconvolution analysis of structurally complex natural products to obtain virtual skeletal trees, thus making the chemical structure data of complex bioactive substances more intuitive . The structural classification of natural products (SCONP) is an organizing principle for charting the known chemical space explored by nature. SCONP arranges the scaffolds of the natural products in a tree-like fashion and provides a viable analysis and hypothesis-generating tool for the design of natural product-derived compound collections . Compared to high-throughput screening, this approach has a higher hit rate. However, the activity of compounds obtained through molecular design based on natural product fragments often ranges from weak to moderate, making subsequent structural optimization an indispensable step in improving activity. The discovery of early natural active drugs occurred before a complete understanding of disease mechanisms, and researchers initially relied on animal pathological models for drug screening. Later, with advancements in molecular pathology, studies began to focus on specific enzymes or proteins as targets for drug screening.
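The ligand-based screening and structural clustering steps mentioned above are typically driven by fingerprint similarity. A minimal sketch (assuming toy bit-set fingerprints rather than descriptors computed by a real cheminformatics toolkit such as RDKit) of ranking a library against a query by Tanimoto coefficient:

```python
# Toy ligand-based similarity screen: rank hypothetical library members
# against a query fingerprint by the Tanimoto coefficient |A∩B| / |A∪B|.

def tanimoto(fp_a, fp_b):
    """Tanimoto similarity between two bit-set fingerprints."""
    if not fp_a and not fp_b:
        return 1.0
    return len(fp_a & fp_b) / len(fp_a | fp_b)

# Hypothetical fingerprints: each set holds the indices of "on" bits.
query = {1, 2, 3, 5, 8}
library = {
    "analog_1": {1, 2, 3, 5, 9},   # close analog of the query
    "analog_2": {1, 2, 7, 11},     # shares a smaller sub-scaffold
    "decoy":    {20, 21, 22},      # unrelated chemotype
}

# Sort library members by decreasing similarity to the query.
ranked = sorted(library.items(),
                key=lambda item: tanimoto(query, item[1]),
                reverse=True)
for name, fp in ranked:
    print(f"{name}: {tanimoto(query, fp):.2f}")
```

In a real campaign, hits above a chosen similarity cutoff would then be clustered and fed into the fragment- and scaffold-based optimization discussed above.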
Due to the development of computer technology, bioinformatics, and structural genomics, an increasing number of important protein targets and their crystal structures, such as adrenergic receptors, potassium ion channels, and sodium-calcium exchangers, have been resolved. This has made it possible to use crystallographic structures for the screening and design of natural active compounds. Unlike traditional high-throughput screening methods based on cells and enzymes, virtual screening based on protein structures can greatly shorten the time and reduce the cost of obtaining active natural products. After obtaining the active compounds through virtual screening, further activity experiments can be conducted to verify the results. Combining computational simulations with experimental results, especially the mutual verification of structure-activity relationships and computational simulation results, can guide the next step of drug design. With the clear structure of drug target proteins, computer simulations can be used to simulate the binding of drug molecules to target proteins, and even obtain corresponding complex crystal structures directly. This provides information for the targeted modification of drug molecules. Therefore, virtual screening based on protein structures and the resolution of complex crystal structures provide a faster and more effective way to obtain and optimize active natural compounds. Currently, the main structural biology techniques for obtaining protein target structures include X-ray crystallography, nuclear magnetic resonance (NMR) spectroscopy, and cryo-electron microscopy (cryo-EM) three-dimensional reconstruction. However, these methods may not be applicable to some complex proteins, which limits the study of many important proteins and drug design. To overcome this limitation, DeepMind has developed a protein structure prediction software based on neural networks, known as AlphaFold ( https://alphafold.com ).
AlphaFold uses large-scale protein structure databases for training and can accurately predict protein folding, secondary structure, domain contacts, and other information. This technology can quickly and accurately predict protein structures, which is very useful for drug design and virtual screening . By predicting protein structures, researchers can use computer simulations to predict the interaction between drug molecules and target proteins, and thus predict the inhibitory activity, affinity, and selectivity of drugs. This method can accelerate the drug development process and provide more targeted drug design strategies. AlphaFold's high accuracy in protein structure prediction has demonstrated its advantages; however, for some complex proteins and protein complexes, challenges remain and further methodological development is needed. Protein structure databases can also provide references for drug design and virtual screening, helping scientists quickly find drug candidates related to specific targets. There are multiple protein structure databases worldwide; the PDB was established by the Brookhaven National Laboratory in the United States in 1971. It is an openly accessible molecular structure database, currently maintained by the Research Collaboratory for Structural Bioinformatics (RCSB), and is considered the most important database in the field of structural biology . The majority of its data comes from experimentally determined three-dimensional structures of biomolecules, including proteins, as well as some nucleic acids, sugars, and complexes formed by nucleic acids and proteins. In the drug screening process, the quantity and quality of compounds in the library are crucial. These compounds need to be representative and be able to cover as many active chemical skeleton types as possible. Some common natural product databases are shown in Table , and some of these databases overlap with each other.
The following introduces several well-known databases. The initial compounds obtained from virtual screening are subsets of the compound databases used in the screening process and require further experimental validation, such as measurements of compound activity and of binding affinity to target proteins. Molecular docking prediction involves estimating the binding free energy, i.e., the binding affinity, between small molecule compounds and target proteins. However, for enzymes and receptors, predicting binding affinity alone is insufficient, and more specific experiments are needed to determine whether the small molecule compounds act as inhibitors (antagonists) or activators. Several techniques directly measure the binding affinity between small molecules and target proteins, for example, microscale thermophoresis (MST), isothermal titration calorimetry (ITC), and surface plasmon resonance (SPR). MST infers the strength of binding between small molecule compounds and target proteins from their movement along a microscale temperature gradient . ITC determines the binding constants and thermodynamic parameters by measuring the heat released or absorbed during the interaction between small molecule compounds and target proteins . SPR quantifies the binding affinity and kinetic parameters by monitoring changes in reflected light caused by the binding of small molecule compounds to target proteins . Natural products are secondary metabolites produced by organisms for their own growth and propagation, rather than being specifically designed for the treatment of human diseases. However, although they possess inherent activity, their pharmaceutical limitations make structural modifications necessary. These modifications should be personalized and tailored to address the specific properties and limitations of the natural products under study, with the aim of optimizing therapeutic efficacy, pharmacokinetics, safety, and biopharmaceutical characteristics.
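These readouts share the same underlying 1:1 binding model: SPR yields kinetic rate constants whose ratio gives the dissociation constant (K_D = k_off/k_on), while ITC and MST titrations trace out the bound fraction [L]/(K_D + [L]). A small sketch with assumed, purely illustrative rate constants:

```python
# 1:1 binding-model relations underlying SPR/ITC/MST data analysis.
# The rate constants below are assumed for illustration only.

def dissociation_constant(k_on, k_off):
    """K_D (M) from kinetic rate constants, as derived from SPR sensorgrams."""
    return k_off / k_on

def fraction_bound(ligand_conc, k_d):
    """Equilibrium fraction of target bound at free ligand concentration [L]."""
    return ligand_conc / (k_d + ligand_conc)

# Hypothetical kinetics: k_on = 1e5 /M/s, k_off = 1e-2 /s  ->  K_D = 100 nM.
k_d = dissociation_constant(k_on=1e5, k_off=1e-2)
print(f"K_D = {k_d:.1e} M")
for conc in (1e-8, 1e-7, 1e-6):
    print(f"[L] = {conc:.0e} M -> fraction bound {fraction_bound(conc, k_d):.2f}")
```

At [L] = K_D exactly half of the target is bound, which is why titration midpoints are commonly read off as the dissociation constant.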
Looking at successful examples of natural products evolving into drugs, the extent of structural changes can vary from drastic transformations to minor alterations involving only a few atoms or functional groups. While there is no fixed pattern, the underlying principles and concepts of structural modification remain consistent. Refining their structures through these modifications aims to enhance activity potency and selectivity. Modern technologies such as cryo-electron microscopy (cryo-EM), AlphaFold, and natural compound databases have revolutionized the field of drug discovery and development, offering exciting new possibilities for modifying bioactive natural products. By utilizing these technologies and resources, researchers can use structure-based drug design in medicinal chemistry to optimize bioactive natural products and improve various aspects of their properties, ultimately developing safer and more effective natural drugs.
Comparing the effectiveness of different

With the recent advances in high-throughput 'omics' techniques, sequencing technologies have emerged as key tools for studying microbial communities. To date, 16S rRNA amplicon sequencing is by far the most widely used approach to investigate microbial composition dynamics (Starke et al., ). It has provided valuable insights into microbial diversity across various environments (Ainsworth et al., ; Bachran et al., ; Lopez-Fernandez et al., ; Van Eesbeeck et al., ). However, it is well-known that the outcome of 16S rRNA amplicon sequencing can be affected by the DNA extraction method, primer design, PCR amplification, sequencing artefacts and bioinformatics analysis (Costea et al., ; Karst et al., ; Kebschull & Zador, ; Zielinska et al., ). Consequently, comparing results obtained using different methods is not always straightforward (Abellan-Schneyder et al., ). Additionally, special care must be taken when dealing with low-biomass samples, because they are highly susceptible to contamination during sample preparation, DNA extraction and subsequent manipulations (Salter et al., ). One example of such a challenging low-biomass environment is bentonite clay, which is considered as the backfill material in engineered barriers for the geological disposal of nuclear waste in many countries (Sellin & Leupin, ). Depending on the conditions, bentonite might contain 10²–10⁶ CFU/g in total and ~10⁶ viable cells/g (Burzan et al., ; Engel, Ford, et al., ; Stroes-Gascoyne et al., ; Vachon et al., ). This is several orders of magnitude lower than, for example, soils, which can harbour up to 10¹⁰ bacterial cells/g (Raynaud & Nunan, ). The highly compacted bentonite buffer in a geological waste repository is expected to limit microbial activity due to its high swelling pressure and low water activity (Pedersen et al., ; Stroes-Gascoyne et al., ).
However, several in situ experiments demonstrated the persistence of microorganisms within bentonite (Burzan et al., ; Chi Fru & Athar, ; Engel, Ford, et al., ). Moreover, a significant number of bacterial cells are expected to remain viable under the harsh conditions anticipated after repository closure, which include increased pressure, heat and irradiation (Haynes et al., ). Microbially‐influenced corrosion by sulfate‐reducing bacteria (SRB) is one of the primary microbial processes of concern regarding the geological disposal of nuclear waste (King et al., ). To gain a comprehensive understanding of the impact of microbial processes during geological disposal of nuclear waste, recent studies have employed a combination of cultivation‐dependent and cultivation‐independent approaches (Bartak et al., ; Beaver et al., ; Burzan et al., ; Vachon et al., ). However, an additional challenge of working with clay‐rich samples is that they can hamper the efficiency of several standard cultivation‐independent methods. Clay particles are known to tightly adsorb organic and inorganic phosphorous compounds (Cai et al., ). Since the DNA backbone is rich in phosphate, DNA molecules tend to adhere to clay adsorption sites. This also includes DNA released after cell lysis which can be adsorbed on clay particle surfaces before the DNA extraction procedure is finalized. Consequently, this significantly hinders the efficiency of DNA extraction (Frostegård et al., ). On the other hand, the high adsorption capacity enables clay particles to preserve DNA molecules over long time periods (Frostegård et al., ; Romanowski et al., ). An additional difficulty with DNA extractions from bentonite is its swelling upon addition of lysis buffer, hindering the complete release of microbial cells from the bentonite matrix (Povedano‐Priego et al., ). 
A possible solution is indirect DNA extraction methods where intact cells are recovered from the sample before lysis and DNA extraction (Högfors-Rönnholm et al., ). However, these methods introduce bias because separation treatments exhibit varying efficiencies depending on the type of microorganisms involved (Holmsgaard et al., ), although this bias was rather limited in some studies (Courtois et al., ; Delmont et al., ). Nevertheless, direct DNA extraction methods are preferred and are used most frequently. To overcome the limitations of direct methods for clay-rich samples, blocking agents such as skim milk are often added. However, their usage may inadvertently introduce varying concentrations of contaminating DNA (Ikeda et al., ; Takada-Hoshino & Matsumoto, ). To date, a few DNA extraction methods have been published for clay; however, they have either been validated by spiking DNA of a single strain (Engel, Coyotzi, et al., ), cells of only two strains (Stone et al., ) or not validated at all (Chi Fru & Athar, ; Lopez-Fernandez et al., ; Povedano-Priego et al., ). Validation using a mock community is essential, as applying different DNA extraction methods to clay samples has resulted in significant variation in the outcomes of 16S rRNA amplicon sequencing (Mijnendonckx et al., ). Therefore, a more standardized research methodology is needed to study microbial communities in low-biomass, clay-rich environments. Two distinct DNA extraction methods (Engel, Coyotzi, et al., ; Povedano-Priego et al., ) have shown suitability for clay samples (Mijnendonckx et al., ). The method described by Engel, Coyotzi, et al. is a kit-based approach involving a combination of sodium dodecyl sulfate (SDS)-based and mechanical lysis, followed by DNA binding and washing on a silica column. On the other hand, the method proposed by Povedano-Priego et al.
includes pre-treatment with phosphate buffer and glass beads, chemical and enzymatic lysis using a cocktail containing polyvinylpyrrolidone (PVP), SDS, proteinase K, lysozyme and mechanical/thermal shocks, followed by DNA precipitation with agents including phenol and chloroform. Both protocols offer an optional additional step for DNA concentration. Validation of the two methods was previously only performed with one replicate of Opalinus Clay spiked with a mock community (Mijnendonckx et al., ). Here, we performed a detailed inter-laboratory comparison of these two DNA extraction methods with slight laboratory-specific modifications on sterilized Wyoming MX-80 bentonite amended with two defined mock communities. Our objective was to compare the performance of both DNA extraction approaches by means of evaluating DNA yields, amplifiability of obtained DNA, the community profile and the presence of contaminants in the context of the clay and the methods considered.

Mock communities

We selected two commercial microbial community standards consisting of DNA or intact cells of three Gram-negative bacteria, five Gram-positive bacteria and two yeast species with varying size and cell wall recalcitrance (Table ; ZymoBIOMICS Mock Community standards, Zymo Research Corporation, Irvine, USA). Mock 1 (D6300) contains a total cell concentration of ca. 1.4 × 10¹⁰ cells/mL and a linear distribution of each of the species, whereas Mock 2 (D6310) contains ca. 1.5 × 10⁹ cells/mL with a logarithmic distribution of the different strains (Figure ).

Bentonite

Wyoming MX-80 bentonite was provided by the National Cooperative for the Disposal of Radioactive Waste (NAGRA), sterilized in the same facility, and used by all three participating labs. Sterility was achieved by gamma irradiation with a total dose of 50 kGy. Sterility validation was performed by cultivation in each lab. To this end, 5 mL PBS was supplemented to 0.5 g MX-80 and stirred for 30 min.
Afterwards, 100 μL of a 1/10 dilution series was cultivated in liquid and solid R2A medium (Reasoner & Geldreich, ) and incubated at 30°C under aerobic conditions for 3 days and under anaerobic conditions for 3 weeks. In addition, the presence of SRB was probed by cultivation in modified Postgate's B medium (Schwartz, ) and incubation at 30°C under anoxic conditions for 28 days.

Experimental setup

Two grams of sterile MX-80 bentonite was supplemented with 12.5 mL phosphate-buffered saline (PBS), and either 75 μL of Mock 1 or Mock 2 or none (control). All setups were performed in triplicate. Samples were thoroughly mixed by vortexing and incubated for 3 days at 4°C to enable water absorption into and cell interaction with bentonite. After the incubation step, samples were centrifuged at 11,000 rpm for 10 min. The supernatant was discarded and the pellet was used for the following DNA extraction methods. To elucidate the possible bias introduced by the presence of bentonite on DNA recovery, extractions were also performed on 75 μL of both mock communities.

DNA extraction methods

Three independent laboratories used distinct DNA extraction methods to have multiple independent replicates for validation of the two methods. Lab 1 used the kit-based approach (Engel et al., ), Lab 2 the method based on phenol-chloroform extraction (Povedano-Priego et al., ), and Lab 3 applied both extraction methods. However, there were slight modifications to both original protocols, which are described below.

Lab 1⸺Kit-based

Lab 1 used the protocol by Engel, Coyotzi, et al. based on the DNeasy® PowerMax® Soil Kit (Qiagen, Germany) with minor modifications. Briefly, 15 mL PowerBead solution was added to each bentonite pellet and the sample was vortexed for 1 min. After addition of 1.2 mL lysis solution, samples were vortexed vigorously for 30 s followed by vortexing in a horizontal vortex adapter at maximum speed for 10 min. Subsequently, the samples were placed in a shaking water bath at 65°C for 30 min.
Afterwards, the manufacturer's protocol was followed and the DNA was eluted into 1 mL elution buffer (10 mM Tris). The extracted DNA was further purified and concentrated using the Genomic DNA Clean & Concentrator™ Kit (Zymo Research, USA) following the manufacturer's protocol to obtain 50 μL as the final volume. The extracted DNA was subsequently quantified using a Qubit 2.0 fluorometer (Invitrogen, Life Technologies, USA) according to the manufacturer's protocol.

Lab 2⸺Phenol-chloroform

Lab 2 performed the extractions following the optimized protocol for total DNA isolation from bentonite as previously described by Povedano-Priego et al. . The bentonite pellet obtained after centrifugation was distributed in portions of 0.3 g in individual 2-mL screw-cap micro-centrifuge tubes. This protocol consists of a pre-treatment using 400 μL Na₂HPO₄ (0.12 M, pH 8.0) followed by chemical and enzymatic lysis by the addition of 600 μL of lysis buffer (100 mM Tris–HCl [pH 8.0], 100 mM EDTA [pH 8.0], 100 mM NaCl, 1% PVP and 2% SDS), 24 μL freshly made lysozyme (10 mg/mL) and 2.5 μL proteinase K (20 mg/mL) to each tube. Mechanical lysis was performed twice using a FastPrep® FP120 (MP Biomedicals) bead-beater at 5.5 m s⁻¹ for 45 s. Afterwards, samples were incubated at 37°C for 30 min first and then at 60°C for 1 h. Then, samples were centrifuged at 14,000 g for 5 min and all supernatants for the same sample were pooled in a 15 mL tube. An additional mechanical lysis step was performed with the bentonite pellet using 1 mL lysis buffer, followed by another centrifugation step. One volume of phenol:chloroform:isoamyl alcohol (25:24:1 v/v) was added to the tubes and centrifuged at 1,500 g for 10 min at 4°C. This step was followed by a modification of the protocol described by Povedano-Priego et al.: the supernatants were transferred to a new tube and one volume of chloroform was added and mixed. Tubes were again centrifuged.
Afterwards, the next steps were performed following the extraction method in Povedano-Priego et al. . Total DNA was resuspended in 35 μL milli-Q water, quantified on a Qubit 3.0 Fluorometer (Life Technologies) and stored at −20°C until further processing.

Lab 3⸺Kit-based

Lab 3 followed the protocol of Lab 1 with some minor modifications. After incubation in a water bath at 65°C for 30 min, samples were homogenized by vortexing the tubes for 10 min at maximum speed with a Vortex adapter cat 13000-V1 (Qiagen, the Netherlands). Then, we followed the same protocol until the elution step. Purified DNA was eluted in 2.3 mL of the provided elution buffer (10 mM Tris). Nucleic acids were precipitated using 4 μL/mL GenElute-LPA (25 mg/mL; Sigma-Aldrich, Belgium), 0.1 volumes of 5 M NaCl and 1 volume of isopropanol, gently mixed by inverting the tubes and stored at −20°C overnight. Precipitated DNA was pelleted by centrifugation at 13,000 g for 30 min at 4°C and then washed with 80% ice-cold ethanol (stored at −20°C). Pellets were air-dried in a laminar flow for 15 min and finally suspended in 125 μL of elution buffer (10 mM Tris). DNA concentration was measured with the QuantiFluor dsDNA sample kit (Promega, the Netherlands).

Lab 3⸺Phenol-chloroform

The procedure of Povedano-Priego et al. was followed with some modifications. Mechanical lysis was performed using a TissueLyser II (Qiagen, Belgium) for 10 min at 30 Hz. In addition, after the extraction with phenol:chloroform:isoamyl alcohol (25:24:1 v/v), the upper (aqueous) phase was transferred to a new tube and washed by adding one volume of chloroform:isoamyl alcohol (1:1 v/v). Tubes were again centrifuged at 1,500 g for 10 min at 4°C and the supernatants were transferred to a new tube. Afterwards, DNA was precipitated by adding 1 volume of 75% isopropanol and 1/10 volume of 3 M sodium acetate (pH 5.3) and overnight incubation at −20°C.
Afterwards, the sample was centrifuged for 30 min at 5,000 g at 4°C, the pellet was washed with 5 mL of an 80% ice-cold ethanol solution (stored at −20°C) and centrifuged for 5 min at 10,000 g. The supernatant was discarded and the pellet was dried overnight at 30°C. Finally, all DNA pellets obtained for one replicate were pooled and dissolved in 500 μL milli-Q water. Subsequently, the sample was applied on a 100 kDa Amicon filter unit (Merck, Belgium) and centrifuged for 10 min at 14,000 g. The pellet was washed twice with 500 μL milli-Q water. Finally, the pellet was eluted by centrifugation for 2 min at 1,500 g. DNA concentration was measured with the QuantiFluor dsDNA sample kit (Promega, the Netherlands).

16S rRNA amplicon sequencing

All DNA samples were sent to Lab 3, where all the PCRs were performed to minimize variability that could be introduced by that step. The V3-V4 region of the 16S rRNA gene was amplified with primers 341F (5′-CCTACGGGNGGCWGCAG-3′) and 785R (5′-GGACTACHVGGGTATCTAATCC-3′) (Klindworth et al., ) using the Phusion High-Fidelity Polymerase (Thermofisher Scientific, Belgium). Primers contained an Illumina adapter overhang sequence: 5′-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAG-3′ for the forward primer and 5′-GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAG-3′ for the reverse primer. PCR conditions were as follows: 1 min at 98°C followed by 30 cycles of 10 s at 98°C, 30 s at 62°C and 1 min at 72°C, followed by a final extension of 10 min at 72°C. Five nanograms of DNA was used as template in all samples, except when the concentration was too low, in which case 5 μL was used.
Initially, 70 samples were used for PCR amplification, including 48 samples spiked with a Mock community, 12 sterile bentonite samples, two negative kit controls, two no‐template PCR controls (NTC) and three replicates of each Mock community standard consisting of DNA instead of intact cells, processed according to the manufacturer's recommendations (Table ; ZymoBIOMICS Mock Community standards D6306 and D6311, Zymo Research Corporation, Irvine, USA) to examine possible PCR bias. PCR results were evaluated via gel electrophoresis by loading 5 μL (at least 250 ng) of each sample onto a 1% agarose gel. PCR products from samples that were positive were purified with the Wizard® SV Gel and PCR Clean‐Up System (Promega, The Netherlands) according to the manufacturer's protocol. Samples with a DNA yield above the detection limit but with a negative result after PCR amplification were further purified by either heating to 50°C for 1 h, diluting 20, 40 or 80 times, drop dialysis or on a 100‐kDa Amicon® Ultra filter device (Merck, Belgium). For drop dialysis, a standard Petri dish was half‐filled with milli‐Q water. A nitrocellulose membrane (pore size 0.025 μm, diameter 25 mm, Merck, Belgium) was floated on the water. The aliquot of the DNA sample was pipetted on the membrane and left to dialyze for 1 h. Afterwards, the sample was recovered from the top of the membrane. Purification on a 100‐kDa Amicon® Ultra filter device (Merck, Belgium) was performed by applying the sample on the column and centrifuging it for 10 min at 14,000 g. The sample was washed twice with 500 μL milli‐Q water. To recover the concentrated sample, the Amicon® filter was placed upside down in a clean microcentrifuge tube and centrifuged for 2 min at 1,500 g. All samples were sequenced on the Illumina MiSeq platform according to the manufacturer guidelines at BaseClear B.V (the Netherlands). 
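Before turning to the sequence analysis, it is worth sanity-checking the spiking levels implied by the experimental setup (75 μL of each mock community into 2 g of bentonite; the cell concentrations are those stated for the ZymoBIOMICS standards above). A minimal back-of-envelope calculation:

```python
# Back-of-envelope check of the mock-community spiking levels described
# in the experimental setup (values taken from the text above).

SPIKE_VOLUME_ML = 0.075   # 75 uL of mock community per setup
BENTONITE_G = 2.0         # 2 g of sterile MX-80 bentonite per setup

mocks = {
    "Mock 1 (D6300, linear)":      1.4e10,  # cells/mL
    "Mock 2 (D6310, logarithmic)": 1.5e9,   # cells/mL
}

spiked_per_g = {}
for name, cells_per_ml in mocks.items():
    total_cells = cells_per_ml * SPIKE_VOLUME_ML
    spiked_per_g[name] = total_cells / BENTONITE_G
    print(f"{name}: {total_cells:.2e} cells spiked, "
          f"{spiked_per_g[name]:.2e} cells/g bentonite")
```

Both spikes (roughly 5 × 10⁸ and 6 × 10⁷ cells/g) sit at or above the upper end of the viable counts reported for natural bentonite, which is what makes the mock signal recoverable despite the low extraction efficiency from clay.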
Bioinformatics and statistical analyses

Primers were first removed from the 16S rRNA gene amplicon sequencing data using cutadapt (Martin, ). Subsequently, raw reads were processed according to the DADA2 pipeline with recommended settings (Callahan et al., ). Briefly, reads with ambiguous or poor-quality bases and more than two expected errors were discarded. The paired reads were merged, and chimeras were identified and removed. Only amplicon sequence variants (ASVs) with more than two reads were retained. Taxonomy was assigned to the ASVs using the naive Bayesian classifier method implemented in DADA2 with the Silva taxonomic training dataset (version 132) as a reference (Callahan, ). Potential contaminant ASVs were identified through the Decontam (v.1.6.0) R package (Davis et al., ). Specifically, the 'combined' method was used, where frequency and prevalence probabilities are combined with Fisher's method and used to identify contaminants. For samples with a DNA concentration below the detection limit but a positive sequencing result, the detection limit of the DNA measurement method was used to calculate the total amount of DNA in the sample. With a probability threshold of 0.5, we identified 10 contaminant ASVs and excluded them from subsequent analyses. The 16S rRNA amplicon sequencing data were further analysed in R version 4.3.0 with the R package phyloseq (McMurdie & Holmes, ). Subsampling was performed based on the lowest number of reads obtained over the different samples amended with a Mock community, that is, a coverage of 8403 reads. Rarefaction curves indicate that this level of subsampling adequately represented the bacterial diversity in the samples (Figure ). The package chkMocks was used to compare the composition obtained in each condition with the theoretically expected composition (Sudarshan et al., ). The β-diversity was calculated by non-metric multidimensional scaling (NMDS) with Bray-Curtis distances with the command 'ordinate'.
Afterwards, a distance matrix of these data was calculated with the command ‘distance’. This distance matrix was used to perform a permutation test for homogeneity of multivariate dispersions using the command ‘betadisper’ in the package vegan (Oksanen et al., ). Permutational multivariate analysis of variance (PERMANOVA) using the ‘adonis’ at 999 permutations and α = 0.05 were performed to test whether there was a difference between the DNA extraction methods and if the bentonite had an effect on the outcome. Pairwise multilevel comparison on microbial community structure between samples was performed with the R package pairwiseAdonis with FDR (False Discovery Rate) correction to the p ‐values. The datasets generated and analysed during this study are available in the NCBI Sequence Read Archive (SRA) repository (PRJNA1054184). We selected two commercial microbial community standards consisting of DNA or intact cells of three Gram‐negative bacteria, five Gram‐positive bacteria and two yeast species with varying size and cell wall recalcitrance (Table ; ZymoBIOMICS Mock Community standards, Zymo Research Corporation, Irvine, USA). Mock 1 (D6300) contains a total cell concentration of ca. 1.4 × 10 10 cells/mL and a linear distribution of each of the species, whereas Mock 2 (D6310) contains ca. 1.5 × 10 9 cells/mL with a logarithmic distribution of the different strains (Figure ). Wyoming MX‐80 bentonite was provided by the National Cooperative for the Disposal of Radioactive Waste (NAGRA), sterilized in the same facility, and used by all three participating labs. Sterility was achieved by gamma irradiation with a total dose of 50 kGy. Sterility validation was performed by cultivation in each lab. To this end, 5 mL PBS was supplemented to 0.5 g MX‐80 and stirred for 30 min. Afterwards, 100 μL of a 1/10 dilution series was cultivated in liquid and solid R2A medium (Reasoner & Geldreich, ) and incubated at 30°C in aerobic for 3days and anaerobic conditions for 3 weeks. 
In addition, the presence of SRB was probed by cultivation in modified Postgate's B medium (Schwartz, ) and incubation at 30°C under anoxic conditions for 28 days. Two grams of sterile MX‐80 bentonite was supplemented with 12.5 mL Phosphate‐buffered saline (PBS), and either 75 μL of Mock 1 or Mock 2 or none (control). All setups were performed in triplicate. Samples were thoroughly mixed by vortexing and incubated for 3 days at 4°C to enable water absorption into and cell interaction with bentonite. After the incubation step, samples were centrifuged at 11,000 rpm for 10 min. The supernatant was discarded and the pellet was used for following DNA extraction methods. To elucidate the possible bias introduced by the presence of bentonite on DNA recovery, extractions were also performed on 75 μL of both mock communities. extraction methods Three independent laboratories used distinct DNA extraction methods to have multiple independent replicates for validation of the two methods. Lab 1 used the kit‐based approach (Engel et al., ), Lab 2, the method based on phenol‐chloroform extraction (Povedano‐Priego et al., ) and Lab 3 applied both extraction methods. However, there were slight modifications to both original protocols which are described below. Lab 1⸺Kit‐based Lab 1 used the protocol by Engel, Coyotzi, et al. based on the DNeasy® PowerMax® Soil Kit (Qiagen, Germany) with minor modifications. Briefly, 15 mL PowerBead solution was added to each bentonite pellet and the sample was vortexed for 1 min. After addition of 1.2 mL lysis solution, samples were vortexed vigorously for 30 s followed by vortexing in a horizontal vortex adapter at maximum speed for 10 min. Subsequently, the samples were placed in a shaking water bath at 65°C for 30 min. Afterwards, the manufacturer's protocol was followed and the DNA was eluted into 1 mL elution buffer (10 mM Tris). 
The extracted DNA was further purified and concentrated using the Genomic DNA Clean & Concentrator™ Kit (Zymo Research, USA) following the manufacturer's protocol to obtain 50 μL as the final volume. The extracted DNA was subsequently quantified using a Qubit 2.0 fluorometer (Invitrogen, Life Technologies, USA) according to the manufacturer's protocol. Lab 2⸺Phenol‐chloroform Lab 2 performed the extractions following the optimized protocol for total DNA isolation from bentonite as previously described by Povedano‐Priego et al. . The bentonite pellet obtained after centrifugation was distributed in portions of 0.3 g in individual 2‐mL screw‐cap micro‐centrifuge tubes. This protocol consists of a pre‐treatment using 400 μL Na 2 HPO 4 (0.12 M, pH 8.0) followed by chemical and enzymatic lysis by the addition of 600 μL of lysis buffer (100 mM Tris–HCl [pH 8.0], 100 mM EDTA [pH 8.0], 100 mM NaCl, 1% PVP and 2% SDS), 24 μL freshly made lysozyme (10 mg/mL) and 2.5 μL proteinase K (20 mg/mL) to each tube. Mechanical lysis was performed twice using a FastPrep® FP120 (MP Biomedicals) bead‐beater at 5.5 m s −1 for 45 s. Afterwards, samples were incubated at 37°C for 30 min first and then at 60°C for 1 h. Then, samples were centrifuged at 14,000 g for 5 min and all supernatants for the same sample were pooled in a 15 mL tube. An additional mechanical lysis step was performed with the bentonite pellet using 1 mL lysis buffer, followed by another centrifugation step. One volume of phenol:chloroform:isoamyl alcohol (25:24:1 v/v) was added to the tubes and centrifuged at 1500 g for 10 min at 4°C. This step was followed by a modification of the protocol described by Povedano‐Priego et al. : the supernatants were transferred to a new tube and one volume of chloroform was added and mixed. Tubes were again centrifuged. Afterwards, the next steps were performed following the extraction method in Povedano‐Priego et al. . 
Total DNA was resuspended in 35 μL milli-Q water, quantified on a Qubit 3.0 Fluorometer (Life Technologies) and stored at −20°C until further processing.

Lab 3⸺Kit-based

Lab 3 followed the protocol of Lab 1 with some minor modifications. After incubation in a water bath at 65°C for 30 min, samples were homogenized by vortexing the tubes for 10 min at maximum speed with a Vortex adapter cat 13000-V1 (Qiagen, the Netherlands). The same protocol was then followed until the elution step. Purified DNA was eluted in 2.3 mL of the provided elution buffer (10 mM Tris). Nucleic acids were precipitated using 4 μL/mL Genelute-LPA (25 mg/mL; Sigma-Aldrich, Belgium), 0.1 volumes of 5 M NaCl and 1 volume of isopropanol, gently mixed by inverting the tubes and stored at −20°C overnight. Precipitated DNA was pelleted by centrifugation at 13,000 g for 30 min at 4°C and then washed with 80% ice-cold ethanol (stored at −20°C). Pellets were air-dried in a laminar flow for 15 min and finally suspended in 125 μL of elution buffer (10 mM Tris). DNA concentration was measured with the Quantifluor dsDNA sample kit (Promega, the Netherlands).

Lab 3⸺Phenol-chloroform

The procedure of Povedano-Priego et al. was followed with some modifications. Mechanical lysis was performed using a TissueLyser II (Qiagen, Belgium) for 10 min at 30 Hz. In addition, after the extraction with phenol:chloroform:isoamyl alcohol (25:24:1 v/v), the upper (aqueous) phase was transferred to a new tube and washed by adding one volume of chloroform:isoamyl alcohol (1:1 v/v). Tubes were again centrifuged at 1,500 g for 10 min at 4°C and the supernatants were transferred to a new tube. Afterwards, DNA was precipitated by adding 1 volume of 75% isopropanol and 1/10 volume of 3 M sodium acetate (pH 5.3) and overnight incubation at −20°C.
Afterwards, the sample was centrifuged for 30 min at 5,000 g at 4°C, the pellet was washed with 5 mL of an 80% ice-cold ethanol solution (stored at −20°C) and centrifuged for 5 min at 10,000 g. The supernatant was discarded and the pellet was dried overnight at 30°C. Finally, all DNA pellets obtained for one replicate were pooled and dissolved in 500 μL milli-Q water. Subsequently, the sample was applied on a 100 kDa Amicon filter unit (Merck, Belgium) and centrifuged for 10 min at 14,000 g. The pellet was washed twice with 500 μL milli-Q water. Finally, the pellet was eluted by centrifugation for 2 min at 1,500 g. DNA concentration was measured with the Quantifluor dsDNA sample kit (Promega, the Netherlands).
16S rRNA amplicon sequencing

All DNA samples were sent to Lab 3, where all PCRs were performed to minimize the variability that could be introduced by that step. The V3-V4 region of the 16S rRNA gene was amplified with primers 341F (5′-CCTACGGGNGGCWGCAG-3′) and 785R (5′-GGACTACHVGGGTATCTAATCC-3′) (Klindworth et al., ) using the Phusion High-Fidelity Polymerase (Thermo Fisher Scientific, Belgium). Primers contained an Illumina adapter overhang sequence: 5′-TCGTCGGCAGCGTCAGATGTGTATAAGAGACAG-3′ for the forward primer and 5′-GTCTCGTGGGCTCGGAGATGTGTATAAGAGACAG-3′ for the reverse primer. PCR conditions were as follows: 1 min at 98°C followed by 30 cycles of 10 s at 98°C, 30 s at 62°C and 1 min at 72°C, followed by a final extension of 10 min at 72°C. Five nanograms of DNA were used as template for all samples, except when the concentration was too low, in which case 5 μL was used. Initially, 70 samples were used for PCR amplification, including 48 samples spiked with a Mock community, 12 sterile bentonite samples, two negative kit controls, two no-template PCR controls (NTC) and three replicates of each Mock community standard consisting of DNA instead of intact cells, processed according to the manufacturer's recommendations (Table ; ZymoBIOMICS Mock Community standards D6306 and D6311, Zymo Research Corporation, Irvine, USA) to examine possible PCR bias. PCR results were evaluated via gel electrophoresis by loading 5 μL (at least 250 ng) of each sample onto a 1% agarose gel. PCR products from positive samples were purified with the Wizard® SV Gel and PCR Clean-Up System (Promega, the Netherlands) according to the manufacturer's protocol. Samples with a DNA yield above the detection limit but with a negative result after PCR amplification were further purified by either heating to 50°C for 1 h, diluting 20, 40 or 80 times, drop dialysis or filtration on a 100-kDa Amicon® Ultra filter device (Merck, Belgium). For drop dialysis, a standard Petri dish was half-filled with milli-Q water.
A nitrocellulose membrane (pore size 0.025 μm, diameter 25 mm, Merck, Belgium) was floated on the water. The aliquot of the DNA sample was pipetted onto the membrane and left to dialyze for 1 h. Afterwards, the sample was recovered from the top of the membrane. Purification on a 100-kDa Amicon® Ultra filter device (Merck, Belgium) was performed by applying the sample on the column and centrifuging it for 10 min at 14,000 g. The sample was washed twice with 500 μL milli-Q water. To recover the concentrated sample, the Amicon® filter was placed upside down in a clean microcentrifuge tube and centrifuged for 2 min at 1,500 g. All samples were sequenced on the Illumina MiSeq platform according to the manufacturer's guidelines at BaseClear B.V. (the Netherlands). Primers were first removed from the 16S rRNA gene amplicon sequencing data using cutadapt (Martin, ). Subsequently, raw reads were processed according to the DADA2 pipeline with recommended settings (Callahan et al., ). Briefly, reads with ambiguous bases, poor-quality bases or more than two expected errors were discarded. The paired reads were merged, and chimeras were identified and removed. Only amplicon sequence variants (ASVs) with more than two reads were retained. Taxonomy was assigned to the ASVs using the naive Bayesian classifier method implemented in DADA2 with the Silva taxonomic training dataset (version 132) as a reference (Callahan, ). Potential contaminant ASVs were identified through the Decontam (v.1.6.0) R package (Davis et al., ). Specifically, the ‘combined’ method was used, where frequency and prevalence probabilities are combined with Fisher's method and used to identify contaminants. For samples with a DNA concentration below the detection limit but a positive sequencing result, the detection limit of the DNA measurement method was used to calculate the total amount of DNA in the sample. With a probability threshold of 0.5, we identified 10 contaminant ASVs and excluded them from subsequent analyses.
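Decontam's ‘combined’ mode merges each ASV's frequency-based and prevalence-based p-values with Fisher's method before applying the classification threshold. A minimal Python sketch of that combination step (the per-ASV p-values below are invented for illustration; decontam itself runs in R):

```python
import math

def fisher_combined(pvals):
    """Fisher's method: X = -2 * sum(ln p) follows a chi-squared
    distribution with 2k degrees of freedom; for even df the survival
    function has a closed form, so no stats library is needed."""
    k = len(pvals)
    half = -sum(math.log(p) for p in pvals)  # X / 2
    return math.exp(-half) * sum(half ** i / math.factorial(i) for i in range(k))

THRESHOLD = 0.5  # probability threshold used in the study

# Hypothetical ASVs with (frequency p-value, prevalence p-value)
scores = {"ASV_a": (0.01, 0.04), "ASV_b": (0.60, 0.35)}
for asv, ps in scores.items():
    p = fisher_combined(ps)
    print(asv, round(p, 4), "contaminant" if p < THRESHOLD else "kept")
```

Low combined scores flag contaminants: ASV_a is classified as a contaminant, ASV_b is kept.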
The 16S rRNA amplicon sequencing data were further analysed in R version 4.3.0 with the R package phyloseq (McMurdie & Holmes, ). Subsampling was performed based on the lowest number of reads obtained over the different samples amended with a Mock community, that is, a coverage of 8403 reads. Rarefaction curves indicate that this level of subsampling adequately represented the bacterial diversity in the samples (Figure ). The package chkMocks was used to compare the composition obtained in each condition with the theoretically expected composition (Sudarshan et al., ). The β-diversity was calculated by non-metric multidimensional scaling (NMDS) with Bray-Curtis distances with the command ‘ordinate’. Afterwards, a distance matrix of these data was calculated with the command ‘distance’. This distance matrix was used to perform a permutation test for homogeneity of multivariate dispersions using the command ‘betadisper’ in the package vegan (Oksanen et al., ). Permutational multivariate analysis of variance (PERMANOVA) using ‘adonis’ with 999 permutations and α = 0.05 was performed to test whether there was a difference between the DNA extraction methods and whether the bentonite had an effect on the outcome. Pairwise multilevel comparisons of microbial community structure between samples were performed with the R package pairwiseAdonis with FDR (False Discovery Rate) correction of the p-values. The datasets generated and analysed during this study are available in the NCBI Sequence Read Archive (SRA) repository (PRJNA1054184).

DNA yield

The theoretically expected total DNA yield would have been ca. 2 μg and 200 ng of DNA for all samples amended with Mock 1 and Mock 2, respectively, if the DNA extraction protocol recommended by Zymo Research (i.e., ZymoBIOMICS™ DNA Miniprep [D4300]) had been applied.
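The β-diversity workflow described above (subsampling every sample to an equal depth, then computing Bray-Curtis distances) can be sketched in Python; the ASV count vectors below are invented, and the real analysis used phyloseq and vegan in R:

```python
import random

def rarefy(counts, depth, seed=0):
    """Subsample a vector of ASV counts to a fixed depth without
    replacement, mimicking the rarefaction step before distance
    calculations."""
    pool = [i for i, c in enumerate(counts) for _ in range(c)]
    random.Random(seed).shuffle(pool)
    sub = [0] * len(counts)
    for i in pool[:depth]:
        sub[i] += 1
    return sub

def bray_curtis(a, b):
    """Bray-Curtis dissimilarity: 1 - 2*C_ij / (S_i + S_j)."""
    shared = sum(min(x, y) for x, y in zip(a, b))
    return 1.0 - 2.0 * shared / (sum(a) + sum(b))

# Two invented samples rarefied to the 8403-read depth used in the study.
s1 = rarefy([5200, 3100, 1800, 400], depth=8403)
s2 = rarefy([4800, 3600, 1500, 600], depth=8403, seed=1)
print(bray_curtis(s1, s2))  # small value: similar communities
```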
Each laboratory obtained quantifiable DNA concentrations from all samples spiked with a Mock community, except for Lab 1 when analysing bentonite samples spiked with Mock 2. However, even in the absence of bentonite, DNA yields were less than half of the expected amount. In the presence of bentonite, this difference increased further, except for DNA extracted with the phenol-chloroform method by lab 3, for which bentonite did not adversely affect the efficiency of DNA extraction. However, a large variation among the replicates was observed (Figure ). DNA yields from the kit-based methods employed by the two labs differed both with and without bentonite: lab 3 obtained 10 times more DNA from Mock 1, and an 80-fold difference was observed for Mock 1 in the presence of bentonite and for all samples spiked with Mock 2. Minor differences were observed when using the phenol-chloroform-based method for Mock-only extractions, but similarly large differences were observed in spiked bentonite samples, where lab 3 achieved a notably higher total DNA yield than lab 2 (ca. 18 and 5 times more for samples spiked with Mock 1 and Mock 2, respectively). It is worth mentioning that the kit-based extraction method failed to yield any measurable amount of DNA from the sterile unspiked bentonite samples. Conversely, the phenol-chloroform-based method successfully extracted DNA from five of the six replicates of sterile unspiked samples (Figure ). The large difference between the kit-based methods, particularly in the absence of bentonite, may result from the following four factors. First, lab 3 applied a vortexing step after incubation in the water bath. Second, lab 3 used a larger volume of elution buffer (2.3 mL) than lab 1 (1 mL). Third, lab 3 used precipitation with LPA for DNA concentration, whereas lab 1 used the Genomic DNA Clean & Concentrator™ Kit. Finally, the volume used for the final elution was 50 μL for lab 1 and 125 μL for lab 3.
To elucidate the effect of each of these differences, lab 3 conducted different combinations of the kit-based extraction protocols on Mock 1 without bentonite. First, we investigated the effect of the additional vortexing step of lab 3 and of the increased volume (2.3 mL instead of 1 mL) at the intermediate elution. The total amount of DNA was measured after the intermediate elution, before the additional cleaning of the DNA. Including the vortexing step in the procedure of lab 1, or removing it from the protocol of lab 3, did not increase DNA yield. However, eluting in 2.3 mL instead of 1 mL increased the amount of DNA, matching the results observed with the procedure of lab 3 (Figure ). Next, the protocol of lab 1 was performed with an intermediate elution in 2.3 mL instead of 1 mL and a final elution in 50 and 125 μL. This indicated that the upscaling of the volumes is the main factor explaining the difference in DNA yield between lab 1 and lab 3. Finally, as a complete kit-based approach is less time-consuming and is expected to be more standardized and reproducible, we combined the kit-based protocol of lab 3 with the Genomic DNA Clean & Concentrator™ Kit (Zymo Research, USA) of lab 1 and eluted in 125 μL. This also resulted in similar yields (Figure ). It is noteworthy that when Lab 1's procedure was replicated in Lab 3, even by the same individual, the DNA yield was five times higher than when conducted in Lab 1, highlighting the potential impact of differences in lab equipment (Figures and ).

PCR

Initially, 62 DNA samples were subjected to PCR amplification; the samples used for the protocol optimizations described in the previous paragraph were not included. PCR was successful for all 15 samples originating from lab 1 (kit-based method), including both negative kit control samples and the sterile unspiked bentonite samples (Table ).
Similarly, 16 of the 17 samples from lab 3 using the kit‐based approach were positive, including two unspiked bentonite samples. Only three of the 15 samples from lab 2 using the phenol‐chloroform approach were positive, namely the three replicates of Mock 1 without bentonite. Finally, nine of the 15 samples of lab 3 processed with the phenol‐chloroform method were positive (Table ). In addition to the 62 samples, we included two NTC in the PCR reaction. It is important to note that these NTC samples also yielded a band. Finally, we included PCR reactions on three replicates of each ZymoBIOMICS Microbial Community DNA Standard (Zymo Research Corporation, Irvine, USA). Samples with a DNA yield above the detection limit but with a negative result after PCR amplification underwent additional purifications. The three replicates of bentonite spiked with Mock 2 extracted with the phenol‐chloroform‐based method by lab 3 underwent additional purification through drop dialysis, followed by diluting the purified samples 40 times. This resulted in positive PCR amplification for two out of three replicates after a 40‐times dilution. As the samples of lab 2 visually differed from the other samples (Figure ), attempts were made to improve the PCR results by first heating the samples to 50°C and diluting them 20, 40 or 80 times. However, these measures did not yield a positive PCR reaction. Therefore, samples were purified by other methods. At first, since the phenol‐chloroform extraction method differed between labs 2 and 3 from the phenol‐chloroform step onwards, the DNA extraction from the pellet was repeated starting from there. This resulted in a successful amplification of three samples: one replicate of bentonite spiked with Mock 1, one replicate of Mock 2 and one replicate of the sterile bentonite samples. Subsequently, an additional purification step was performed using drop dialysis, enabling the amplification of the remaining two replicates of Mock 2. 
The other seven samples of lab 2 remained negative (Table ). As all these purification steps could affect the outcome of further analyses, we included additional controls where possible. To this end, Mock 1 samples of lab 2 that were already positive after the first PCR were also purified starting from the phenol-chloroform step onwards and amplified again to compare both results. In addition, to check the effect of the dilution, the two replicates of Mock 2 processed by lab 2 that were positive after drop dialysis were also diluted 40 times and amplified again, and the result was compared to that of the undiluted sample. In summary, successful PCR was achieved for 41 of the 48 samples spiked with a mock community, 6 of the 12 sterile bentonite samples and the 2 negative kit controls. An overview of all samples, DNA yields and the measures needed to obtain a positive PCR reaction is given in Table . All these samples, together with the PCR reactions performed on three replicates of each ZymoBIOMICS Microbial Community DNA Standard (Zymo Research Corporation, Irvine, USA), 2 NTC PCR controls and controls to assess the impact of the additional phenol-chloroform extraction, drop dialysis and dilution, totalling 66 samples, were sent for 16S rRNA amplicon sequencing.

16S rRNA amplicon sequencing

In both NTC (lab 3) and the two kit controls (lab 1), Cupriavidus was the predominant genus, constituting over 99% of the total relative abundance (Supplementary Figure ). In total, 10 ASVs were identified as true contaminants and were removed from the dataset in all subsequent analyses (Table ). After eliminating the contaminants, a total of 123 ASVs comprising 59 genera were identified. In most of the samples spiked with a mock community, only the eight genera present in the Mock were identified (Figure ).
However, there is a discrepancy between the number of identified ASVs and the expected number (Figure ), which can mainly be attributed to several genera being represented by multiple ASVs. Nevertheless, the most abundant ASVs assigned to each genus were consistent across all samples (Figure ). In the unspiked sterile bentonite samples, contaminants constituted more than 96% of the total reads, reaching over 99% in four of six samples (Figure ). Many spurious ASVs were identified, collectively representing only 0.4% to 3.6% of the total relative abundance. To assess potential PCR bias, we included controls with mock community standards composed of DNA instead of intact cells (Zymo Research Corporation, Irvine, USA). Sequencing results of these controls revealed minimal bias and only minor variations across the different replicates (Figure ). This was confirmed by high Spearman's correlation coefficients comparing the samples with the theoretical composition (Figure ). The logarithmic distribution in Mock 2 allowed us to establish the detection limit. Our findings indicate that Lactobacillus, theoretically present at a relative abundance of 0.012%, could be reliably identified in two out of three replicates. However, strains at lower abundances, such as Enterococcus (0.001%) and Staphylococcus (0.0001%), remained undetectable (Figure ). Spearman's correlation coefficients were slightly lower compared to those obtained with the DNA of Mock 1, but the results still matched the theoretical composition well and were identical between replicates (ρ = 0.728). Samples spiked with Mock 1 demonstrated that both the kit-based and the phenol-chloroform-based DNA extraction methods captured all species present in the mock community (Figure ). Moreover, we observed minimal variation among replicates, although the variation was higher in the samples extracted with the phenol-chloroform method (Figure ).
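The detection limit inferred from the logarithmically distributed Mock 2 follows directly from the sequencing depth. A quick binomial calculation at the 8403-read subsampling depth (ignoring extraction and PCR bias) shows why a taxon at 0.012% is borderline detectable while lower abundances are effectively invisible:

```python
DEPTH = 8403  # subsampling depth used for the 16S data

def expected_reads(rel_abundance, depth=DEPTH):
    """Mean number of reads expected for a taxon at a given abundance."""
    return rel_abundance * depth

def p_detected(rel_abundance, depth=DEPTH):
    """Probability of sampling at least one read from the taxon,
    assuming reads are drawn independently (binomial model)."""
    return 1.0 - (1.0 - rel_abundance) ** depth

for taxon, ra in [("Lactobacillus", 0.00012),
                  ("Enterococcus", 0.00001),
                  ("Staphylococcus", 0.000001)]:
    print(f"{taxon}: ~{expected_reads(ra):.2f} reads, "
          f"P(>=1 read) = {p_detected(ra):.2f}")
```

Under this simple model, Lactobacillus (0.012%) is expected to yield roughly one read and to be seen in about two of three replicates, consistent with the observation, while Enterococcus and Staphylococcus fall well below one expected read.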
Notably, the variation in lab 2-processed samples without bentonite decreased after applying an additional phenol-chloroform extraction according to the method of lab 3 (Figure ). The reason for the observed variation in samples processed by lab 3 in the presence of bentonite remains unclear. It is worth mentioning that the total DNA extracted from these samples also varied among the replicates, and that the relative abundance of ASVs identified as contaminants was 70% in one of the replicates (Figure ). In all other samples spiked with Mock 1, the contribution of contaminants was very low. These observations were further supported by Spearman's correlation coefficients compared to the theoretical composition (Figure ). The highest correlations were observed for lab 3, regardless of the extraction method employed. Importantly, the presence of bentonite did not adversely affect the correlation with the theoretical composition. The samples spiked with Mock 2 also showed minor variations among the different replicates. In addition, Spearman's correlation coefficients were similar across all conditions, with all samples achieving values above 0.6. However, the relative abundance of contaminant ASVs was generally much higher than in the samples spiked with Mock 1, especially in the presence of bentonite. Importantly, diluting the samples to mitigate the impact of PCR inhibitors could lead to a significant increase in the presence of contaminants (Figure ). As a rank-based approach such as Spearman's correlation might not be suitable when only one or a few strains are dominant, we also used non-metric multidimensional scaling (NMDS) based on Bray-Curtis distances to evaluate the diversity among the samples spiked with Mock 1 or Mock 2 between DNA extraction approaches (Figure ). Most replicates are located close together, except for bentonite spiked with Mock 1 processed by lab 3 with the phenol-chloroform-based approach (L3_phe).
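The chkMocks comparison boils down to a Spearman rank correlation between each sample's relative abundances and the theoretical composition. A stdlib-only sketch (the abundance vectors are invented; real values come from the sequencing data):

```python
def ranks(values):
    """Average ranks with tie handling."""
    order = sorted(range(len(values)), key=values.__getitem__)
    out = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            out[order[k]] = (i + j) / 2 + 1  # average rank for the tie run
        i = j + 1
    return out

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

theoretical = [0.17, 0.18, 0.10, 0.10, 0.14, 0.04, 0.12, 0.15]  # invented
observed    = [0.20, 0.16, 0.09, 0.11, 0.13, 0.03, 0.12, 0.16]  # invented
print(round(spearman(observed, theoretical), 3))
```

A high rho indicates that the rank order of the eight mock genera is preserved, even when absolute abundances shift.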
Overall, this confirmed limited variability among replicates. Furthermore, samples without bentonite grouped better than samples with bentonite (Figure ). To statistically evaluate the impact of the DNA extraction method on the results, a PERMANOVA analysis was conducted on samples spiked with Mock 1 or Mock 2, including the DNA mock. Only samples without bentonite were included, and PERMANOVA was performed for each mock community separately (Table ). Prior to analysis, homogeneity of multivariate dispersions was assessed to ensure comparable variability among groups. The analysis revealed a significant effect of the DNA extraction method for both mock communities (p = 0.017 for Mock 1 and p = 0.003 for Mock 2). However, detailed pairwise comparisons between each group (each extraction method, the DNA mock and the theoretical composition) indicated no significant difference in the microbial composition obtained by the two methods, or between each method and the theoretically expected composition. The only difference observed was between the composition of the DNA mock and the samples extracted with the phenol-chloroform-based method, both for samples spiked with Mock 1 and for samples spiked with Mock 2 (Table ). Additionally, to test the effect of bentonite on the performance of the two extraction methods, we first included all samples with and without bentonite (regardless of the DNA extraction method) and performed PERMANOVA on samples spiked with Mock 1 or with Mock 2. However, for Mock 1, the permutation test for homogeneity of multivariate dispersions was significant; thus, no further PERMANOVA analysis could be conducted. Instead, we evaluated the effect of bentonite on the performance of the extraction methods by comparing all samples with and without bentonite processed with the kit-based extraction or with the phenol-chloroform extraction (Table ). This showed a significant effect of bentonite on each extraction method, regardless of the Mock used (Table ).
However, detailed pairwise comparisons of samples with and without bentonite and the theoretical Mock composition, performed independently for each extraction method and Mock type, were mostly non-significant. The only significant difference between samples with and without bentonite was detected for Mock 1 samples processed with the kit-based extraction (p = 0.025) and Mock 2 samples processed with the phenol-chloroform extraction (p = 0.025).
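The PERMANOVA tests reported above compare between-group to within-group dissimilarity and assess significance by permuting sample labels (999 permutations in adonis). A compact Python sketch of that logic on an invented one-dimensional toy dataset:

```python
import itertools
import random

def pseudo_f(dist, labels):
    """PERMANOVA pseudo-F from a distance matrix (Anderson 2001):
    F = (SS_between / (a - 1)) / (SS_within / (n - a))."""
    n, groups = len(labels), sorted(set(labels))
    ss_total = sum(dist[i][j] ** 2
                   for i, j in itertools.combinations(range(n), 2)) / n
    ss_within = 0.0
    for g in groups:
        idx = [i for i, lab in enumerate(labels) if lab == g]
        ss_within += sum(dist[i][j] ** 2
                         for i, j in itertools.combinations(idx, 2)) / len(idx)
    a = len(groups)
    return ((ss_total - ss_within) / (a - 1)) / (ss_within / (n - a))

def permanova_p(dist, labels, n_perm=999, seed=0):
    """Permutation p-value: fraction of label shuffles with F >= observed."""
    rng = random.Random(seed)
    f_obs = pseudo_f(dist, labels)
    hits = sum(pseudo_f(dist, rng.sample(labels, len(labels))) >= f_obs
               for _ in range(n_perm))
    return (hits + 1) / (n_perm + 1)

# Two clearly separated groups of four samples on a line (invented data).
points = [0.0, 0.1, 0.2, 0.3, 10.0, 10.1, 10.2, 10.3]
labels = ["A"] * 4 + ["B"] * 4
dist = [[abs(p - q) for q in points] for p in points]
print(pseudo_f(dist, labels), permanova_p(dist, labels))
```

With two groups of four there are only 70 distinct label partitions, so even this perfectly separated toy example cannot reach a p-value below about 0.03; the achievable resolution always depends on group sizes as well as the permutation count.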
It is worth mentioning that the kit‐based extraction method failed to yield any measurable amount of DNA from the sterile unspiked bentonite samples. Conversely, the phenol‐chloroform‐based method successfully extracted DNA from five of the six replicates of sterile unspiked samples (Figure ). The large difference between the kit‐based methods particularly in the absence of bentonite may result from the following four factors. First, lab 3 applied a vortexing step after incubation in the water bath. Second, lab 3 used a larger volume of elution buffer (2.3 mL) than lab 1 (1 mL). Third, lab 3 used precipitation methods with LPA for DNA concentration, whereas lab 1 used the Genomic DNA Clean & Concentrator™ Kit. Finally, the volume used for the final elution was 50 μL in case of lab 1 and 125 μL in case of lab 3. To elucidate the effect of each of these differences, lab 3 conducted different combinations of kit‐based extraction protocols on Mock 1 without bentonite. In first instance, we investigated the effect of the additional vortex step of lab 3 and the increased volume (2.3 mL instead of 1 mL) at the intermediate elution. The total amount of DNA was measured after the intermediate elution before the additional cleaning of the DNA. Including the vortexing step in the procedure of lab 1 or removing it from the protocol of lab 3 did not increase DNA yield. However, eluting in 2.3 mL instead of 1 mL increased the amount of DNA matching results observed with the procedure of lab 3 (Figure ). Next, the protocol of lab 1 was performed with an intermediate elution in 2.3 instead of 1 mL and a final elution in 50 and 125 μL. This indicated that upscaling of the volumes seems to be the most explicatory factor that explains the difference in DNA yield between lab 1 and lab 3. 
Finally, as using a complete kit‐based approach is less time‐consuming and is expected to be more standardized and reproducible, we combined the kit‐based protocol of lab 3 with the Genomic DNA Clean & Concentrator™ Kit (Zymo Research, USA) of lab 1 and eluted in 125 μL. This also resulted in similar yields (Figure ). It is noteworthy that when Lab 1's procedure was replicated in Lab 3 even by the same individual, the DNA yield was five times higher than when conducted in Lab 1, highlighting the potential impact of differences in lab equipment (Figures and ). Initially, up to 62 DNA samples were applied for PCR amplification, but the samples used for the protocol optimizations described in the previous paragraph were not included. PCR was successful for all 15 samples originating from lab 1 (kit‐based method), including both negative kit control samples and the sterile unspiked bentonite samples (Table ). Similarly, 16 of the 17 samples from lab 3 using the kit‐based approach were positive, including two unspiked bentonite samples. Only three of the 15 samples from lab 2 using the phenol‐chloroform approach were positive, namely the three replicates of Mock 1 without bentonite. Finally, nine of the 15 samples of lab 3 processed with the phenol‐chloroform method were positive (Table ). In addition to the 62 samples, we included two NTC in the PCR reaction. It is important to note that these NTC samples also yielded a band. Finally, we included PCR reactions on three replicates of each ZymoBIOMICS Microbial Community DNA Standard (Zymo Research Corporation, Irvine, USA). Samples with a DNA yield above the detection limit but with a negative result after PCR amplification underwent additional purifications. The three replicates of bentonite spiked with Mock 2 extracted with the phenol‐chloroform‐based method by lab 3 underwent additional purification through drop dialysis, followed by diluting the purified samples 40 times. 
This resulted in positive PCR amplification for two of the three replicates after a 40-fold dilution. As the samples of lab 2 visually differed from the other samples (Figure ), attempts were made to improve the PCR results by first heating the samples to 50°C and diluting them 20, 40 or 80 times. However, these measures did not yield a positive PCR reaction, so the samples were purified by other methods. First, since the phenol-chloroform extraction method differed between labs 2 and 3 from the phenol-chloroform step onwards, the DNA extraction from the pellet was repeated starting from that step. This resulted in successful amplification of three samples: one replicate of bentonite spiked with Mock 1, one replicate of Mock 2 and one replicate of the sterile bentonite samples. Subsequently, an additional purification step was performed using drop dialysis, enabling the amplification of the remaining two replicates of Mock 2. The other seven samples of lab 2 remained negative (Table ). As all these purification steps could affect the outcome of further analyses, we included additional controls where possible. To this end, Mock 1 samples of lab 2 that were already positive after the first PCR were also purified from the phenol-chloroform step onwards and amplified again to compare both results. In addition, to check the effect of the dilution, the two replicates of Mock 2 processed by lab 2 that were positive after drop dialysis were also diluted 40 times, amplified again, and compared to the undiluted samples. In summary, successful PCR was achieved for a total of 41 of the 48 samples spiked with a mock community, 6 of the 12 sterile bentonite samples and the 2 negative kit controls. An overview of all samples, DNA yields and the measures needed to obtain a positive PCR reaction is given in Table .
All these samples, together with the PCR reactions performed on three replicates of each ZymoBIOMICS Microbial Community DNA Standard (Zymo Research Corporation, Irvine, USA), the 2 NTC PCR controls, and the controls to assess the impact of the additional phenol-chloroform extraction, drop dialysis and dilution, totalling 66 samples, were sent for 16S rRNA amplicon sequencing. In both NTCs (lab 3) and the two kit controls (lab 1), Cupriavidus was the predominant genus, constituting over 99% of the total relative abundance (Supplementary Figure ). In total, 10 ASVs were identified as true contaminants and were removed from the dataset in all subsequent analyses (Table ). After eliminating the contaminants, a total of 123 ASVs comprising 59 genera were identified. In most of the samples spiked with a mock community, only the eight genera present in the Mock were identified (Figure ). However, there was a discrepancy between the number of identified ASVs and the expected number (Figure ), which could mainly be attributed to the fact that several genera were defined by multiple ASVs. Nevertheless, the most abundant ASVs assigned to each genus were consistent across all samples (Figure ). In the unspiked sterile bentonite samples, contaminants constituted more than 96% of the total reads, reaching over 99% in four of six samples (Figure ). Many spurious ASVs were identified, collectively representing only 0.4% to 3.6% of the total relative abundance. To assess potential PCR bias, we included controls with mock community standards composed of DNA instead of intact cells (Zymo Research Corporation, Irvine, USA). Sequencing results of these controls revealed minimal bias and only minor variations across the different replicates (Figure ). This was confirmed by high Spearman's correlation coefficients comparing the samples with the theoretical composition (Figure ). The logarithmic distribution in Mock 2 allowed us to establish the detection limit.
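The contaminant-removal step described above (discarding ASVs that dominate the NTC and kit controls) can be sketched as a simple prevalence filter. This is an illustrative sketch only: the function names, the 1% threshold and the toy counts are assumptions, and the study's actual pipeline and its 10 flagged ASVs are not reproduced here.

```python
# Sketch: flag ASVs prominent in negative controls, then strip them
# from all samples. Threshold and toy data are assumed values.

def relative_abundance(counts):
    total = sum(counts.values())
    return {asv: n / total for asv, n in counts.items()}

def flag_contaminants(samples, control_ids, threshold=0.01):
    """Flag ASVs whose relative abundance in any negative control
    reaches `threshold`.

    samples: {sample_id: {asv_id: read_count}}
    control_ids: sample_ids of the NTC / kit controls
    """
    flagged = set()
    for cid in control_ids:
        for asv, frac in relative_abundance(samples[cid]).items():
            if frac >= threshold:
                flagged.add(asv)
    return flagged

def remove_contaminants(samples, contaminants):
    return {sid: {a: n for a, n in counts.items() if a not in contaminants}
            for sid, counts in samples.items()}

# Toy data: one ASV dominates the control, as Cupriavidus did here.
samples = {
    "NTC_1":   {"ASV_cupriavidus": 990, "ASV_x": 10},
    "Mock1_1": {"ASV_cupriavidus": 50, "ASV_bsubtilis": 500,
                "ASV_ecoli": 450},
}
contams = flag_contaminants(samples, ["NTC_1"])
clean = remove_contaminants(samples, contams)
```

Real pipelines often use more elaborate statistics (e.g., frequency-based models), but the prevalence-in-controls logic is the core of what was done before any composition analysis.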
Our findings indicate that Lactobacillus, theoretically present at a relative abundance of 0.012%, could be reliably identified in two out of three replicates. However, strains at lower abundances, such as Enterococcus (0.001%) and Staphylococcus (0.0001%), remained undetectable (Figure ). Spearman's correlation coefficients were slightly lower than those obtained with the DNA of Mock 1, but the results still matched very well and were identical between replicates (ρ = 0.728). Samples spiked with Mock 1 demonstrated that both the kit-based and the phenol-chloroform-based DNA extraction methods captured all species present in the mock community (Figure ). Moreover, we observed minimal variation among replicates, although the variation was higher in the samples extracted with the phenol-chloroform method (Figure ). Notably, the variation in samples without bentonite processed by lab 2 decreased after applying an additional phenol-chloroform extraction according to the method of lab 3 (Figure ). The reason for the observed variation in samples processed by lab 3 in the presence of bentonite remains unclear. It is worth mentioning that the total DNA extracted from these samples also varied among the replicates and that the relative abundance of ASVs identified as contaminants reached 70% in one of the replicates (Figure ). In all other samples spiked with Mock 1, the contribution of contaminants was very low. These observations were further supported by Spearman's correlation coefficients comparing the samples to the theoretical composition (Figure ). The highest correlations were observed for lab 3, regardless of the extraction method employed. Importantly, the presence of bentonite did not adversely affect the correlation with the theoretical composition. The samples spiked with Mock 2 also showed minor variations among the different replicates.
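The detection limit reported above for the log-distributed Mock 2 can be rationalized with a back-of-envelope read-count calculation: the expected number of reads for a taxon is sequencing depth times relative abundance, and the chance of seeing zero reads is roughly Poisson. The depth of 50,000 reads per sample is an assumed figure for illustration, not the study's actual depth.

```python
import math

# Expected reads per taxon, and the Poisson probability of observing
# none, at an ASSUMED depth of 50,000 reads per sample.
DEPTH = 50_000

taxa = {
    "Lactobacillus":  0.00012,   # 0.012%  (detected in 2/3 replicates)
    "Enterococcus":   0.00001,   # 0.001%  (not detected)
    "Staphylococcus": 0.000001,  # 0.0001% (not detected)
}

expected = {t: DEPTH * f for t, f in taxa.items()}
p_zero = {t: math.exp(-lam) for t, lam in expected.items()}
```

Under these assumptions Lactobacillus averages about 6 reads per sample and is usually seen, while Enterococcus averages 0.5 reads and is missed in most samples, which is consistent with the observed cutoff.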
In addition, Spearman's correlation coefficients were similar across all conditions, with all samples achieving values above 0.6. However, the relative abundance of contaminant ASVs was generally much higher than in the samples spiked with Mock 1, especially in the presence of bentonite. Importantly, diluting the samples to mitigate the impact of PCR inhibitors could lead to a significant increase in the presence of contaminants (Figure ). As a rank-based measure such as Spearman's correlation might not be suitable when only one or a few strains are dominant, we also used non-metric multidimensional scaling (NMDS) based on Bray-Curtis distances to evaluate the diversity among the samples spiked with Mock 1 or Mock 2 between the DNA extraction approaches (Figure ). Most replicates are located closely together, except for bentonite spiked with Mock 1 processed by lab 3 with the phenol-chloroform-based approach (L3_phe). Overall, this confirmed limited variability among replicates. Furthermore, samples without bentonite grouped better than samples with bentonite (Figure ). To statistically evaluate the impact of the DNA extraction method on the results, a PERMANOVA analysis was conducted on samples spiked with Mock 1 or Mock 2, including the DNA mock. Only samples without bentonite were included, and PERMANOVA was performed for each mock community separately (Table ). Prior to analysis, homogeneity of multivariate dispersions was assessed to ensure comparable variability among the groups. The analysis revealed a significant effect of the DNA extraction method for both mock communities (p = 0.017 for Mock 1 and p = 0.003 for Mock 2). However, detailed pairwise comparisons between the groups (each extraction method, the DNA mock and the theoretical composition) indicated no significant difference in microbial composition between the two methods or between each method and the theoretically expected composition.
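The Bray-Curtis dissimilarity underlying the NMDS can be computed directly from two abundance vectors; replicates that "group closely" in the ordination are simply pairs with small Bray-Curtis distances. The profiles below are invented toy data, not the study's, and a minimal pure-Python version of the metric is:

```python
def bray_curtis(u, v):
    """Bray-Curtis dissimilarity between two abundance vectors:
    0 for identical profiles, 1 for profiles sharing no taxa."""
    num = sum(abs(a - b) for a, b in zip(u, v))
    den = sum(a + b for a, b in zip(u, v))
    return num / den if den else 0.0

rep1 = [0.5, 0.3, 0.2]   # toy relative-abundance profiles
rep2 = [0.5, 0.3, 0.2]   # identical replicate
rep3 = [0.0, 0.0, 1.0]   # strongly diverged sample

d_identical = bray_curtis(rep1, rep2)
d_divergent = bray_curtis(rep1, rep3)
```

The NMDS step itself would then embed the full pairwise distance matrix in two dimensions while preserving rank order of distances, which is typically delegated to an established implementation rather than hand-coded.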
The only difference observed was between the composition of the DNA mock and the samples extracted with the phenol-chloroform-based method, both for samples spiked with Mock 1 and for samples spiked with Mock 2 (Table ). Additionally, to test the effect of bentonite on the performance of the two extraction methods, we first included all samples with and without bentonite (regardless of the DNA extraction method) and performed PERMANOVA on samples spiked with Mock 1 or with Mock 2. However, for Mock 1, a permutation test for homogeneity of multivariate dispersions was significant; thus, no further PERMANOVA analysis could be conducted. Instead, we evaluated the effect of bentonite on the performance of the extraction methods by comparing all samples with and without bentonite processed with the kit-based extraction or with the phenol-chloroform extraction (Table ). This showed a significant effect of bentonite on each extraction method, regardless of the mock used (Table ). However, detailed pairwise comparisons of samples with and without bentonite and the theoretical mock composition, performed independently for each extraction method and mock type, were mostly non-significant. The only significant differences between samples with and without bentonite were detected for Mock 1 samples processed with the kit-based extraction (p = 0.025) and for Mock 2 samples processed with the phenol-chloroform extraction (p = 0.025). The goal of this study was to assess the representativeness of DNA extraction from clay by comparing two methods (Engel et al., ; Povedano-Priego et al., ). To address this, we conducted an inter-laboratory comparison of the two methods, with slight modifications, using Wyoming MX-80 bentonite spiked with two mock communities. We compared the obtained DNA yield and purity (i.e., whether additional purification steps were required), the presence of contaminants and the community profile, in the context of the clay and the two methods considered.
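The PERMANOVA analyses above rest on a pseudo-F statistic compared against label permutations. The study would have used an established implementation (e.g., adonis in vegan); the simplified one-way sketch below, on an invented distance matrix with two tight groups, is shown only to make the test's logic concrete.

```python
import random
from itertools import combinations

def pseudo_f(dist, labels):
    """One-way PERMANOVA pseudo-F from a square distance matrix."""
    n = len(labels)
    groups = sorted(set(labels))
    ss_total = sum(dist[i][j] ** 2
                   for i, j in combinations(range(n), 2)) / n
    ss_within = 0.0
    for g in groups:
        idx = [i for i, lab in enumerate(labels) if lab == g]
        ss_within += sum(dist[i][j] ** 2
                         for i, j in combinations(idx, 2)) / len(idx)
    ss_between = ss_total - ss_within
    a = len(groups)
    return (ss_between / (a - 1)) / (ss_within / (n - a))

def permanova_p(dist, labels, n_perm=999, seed=0):
    """Permutation p-value: how often do shuffled labels do as well?"""
    rng = random.Random(seed)
    f_obs = pseudo_f(dist, labels)
    hits = sum(
        pseudo_f(dist, rng.sample(labels, len(labels))) >= f_obs
        for _ in range(n_perm)
    )
    return f_obs, (hits + 1) / (n_perm + 1)

def toy_dist(n_per_group, within=0.1, between=0.9):
    """Invented data: two tight groups separated by a large distance."""
    n = 2 * n_per_group
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                same = (i < n_per_group) == (j < n_per_group)
                d[i][j] = within if same else between
    return d

labels = ["kit"] * 4 + ["phenol"] * 4
f_obs, p_value = permanova_p(toy_dist(4), labels)
```

With eight samples the smallest attainable p-value is limited by the number of distinct label permutations, which is why small designs like the one in this study can show large pseudo-F values yet only moderately small p-values.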
DNA yields in the absence of bentonite

To detect possible differences between the two extraction protocols and variability between the labs, we first focused on the extraction of DNA from both mock communities without bentonite. Under these conditions, we observed no substantial variation in the DNA yield from samples processed with the phenol-chloroform extraction. However, substantial differences were observed with the kit-based extraction method (Figure ). We therefore carefully evaluated the differences between the extraction protocols as performed and conducted additional experiments targeting the role of each difference. Our data indicate that increasing the volume of the elution buffer (both intermediate and final elution) in the procedure of lab 1 to the same level as that of lab 3 raises the DNA yields of both labs to the same level. The yields obtained in this study were at least three times higher than those observed in a recent study comparing different DNA extraction methods on Mock 1 (Spreckels et al., ).

DNA yields in the presence of bentonite

DNA extractions performed on the bentonite samples spiked with the mock community revealed that the presence of bentonite considerably hindered the efficiency of DNA extraction, except in samples processed by lab 3 using the phenol-chloroform-based extraction (Figure ). Clay particles are known to tightly adsorb organic and inorganic phosphorus compounds, including DNA (Cai et al., ). Consequently, this significantly hinders the efficiency of DNA extraction (Frostegård et al., ). Adsorption of 60%–80% of the DNA of single strains by bentonite and its minerals has been observed, resulting in very low or unmeasurable amounts of DNA (Engel et al., ; Pietramellara et al., ; Stone et al., ). In addition, high variation in DNA yields was observed when Opalinus Clay was spiked with Mock 1 and extracted by seven different methods (Mijnendonckx et al., ). Variation in DNA yields between the methods was also observed in this study.
Overall, our results showed that in the presence of bentonite, the kit-based procedure resulted in lower yields than the phenol-chloroform extraction, particularly for Mock 2. Nevertheless, it consistently yielded highly pure DNA ready for downstream applications, as PCR amplification was successful in all cases. Similarly, lower yields were obtained in kit-based DNA extractions from bentonite (Engel, Coyotzi, et al., ) and, in general, when extracting DNA from environmental samples (Luna et al., ). In contrast, the kit-based method resulted in the highest DNA yield when Opalinus Clay was spiked with Mock 1 (Mijnendonckx et al., ). Despite the higher DNA yields obtained with the phenol-chloroform-based method, several additional purification steps or dilutions were necessary before downstream applications were successful, especially for the lab 2 samples. We did not perform spectrophotometric measurements (e.g., Nanodrop) of sample purity, as several samples had DNA concentrations too low to obtain reliable results. Instead, we scored the purity of the samples based on whether they could be amplified. In addition, it was visually evident which samples contained more impurities than others, depending on the extraction method used. We observed marked differences in yields between the labs for both protocols. In the case of the kit-based extractions, DNA yields were considerably lower for lab 1 than for lab 3 for both Mock 1 and Mock 2 (Figure ), consistent with the results of the Mock-only extractions. The lower yields may result from the different elution volumes, as discussed above. The observed differences in purity and in yields in the case of the phenol-chloroform extractions can potentially also be attributed to variations in the protocol employed.
Lab 2 utilized a mixture of phenol:chloroform:isoamylalcohol followed by a washing step with pure chloroform, whereas lab 3 employed a mixture of chloroform and isoamylalcohol after the extraction with phenol:chloroform:isoamylalcohol. It has been shown previously that chloroform alone fails to produce a sharp interface between the aqueous and organic phases, and a mixture of chloroform and isoamylalcohol is recommended (Lever et al., ). Additionally, lab 3 implemented additional washing steps with milli-Q water after DNA precipitation. These two protocol modifications likely contributed to the increased purity and improved efficiency in samples containing bentonite.

Contaminants

One of the common issues in microbial community composition analysis of low-biomass samples, such as bentonite, is the presence of contaminants (Salter et al., ). Failure to include appropriate controls, and the resulting undetected presence of contaminant sequences, can lead to substantial bias in the interpretation of results. A DNA template of 5 ng was used for PCR amplification of all samples, because this has been demonstrated to enhance amplification reproducibility compared to a lower amount of DNA template (Kennedy et al., ). However, some samples had DNA concentrations below the detection limit, necessitating the use of a lower amount of DNA template. This was especially true for the samples spiked with Mock 2, where the strains exhibited a logarithmic distribution and the total number of spiked cells was 10 times lower, resulting in much lower DNA yields than for the samples spiked with Mock 1. Lowering the input DNA clearly led to an increased presence of contaminants, as was also observed when bentonite was spiked with low amounts of DNA from a pure strain (Engel, Coyotzi, et al., ).
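The 5 ng template target translates into a simple pipetting calculation, and dilute extracts hit the reaction's volume cap and necessarily load less DNA. The 10 µL cap below is an assumed value for illustration; only the 5 ng target comes from the text.

```python
TARGET_NG = 5.0   # template target stated in the text
MAX_UL = 10.0     # ASSUMED maximum template volume per reaction

def template_volume(conc_ng_per_ul):
    """µL of extract needed to load 5 ng, capped for dilute samples."""
    if conc_ng_per_ul <= 0:
        return MAX_UL              # below detection limit: load maximum
    return min(TARGET_NG / conc_ng_per_ul, MAX_UL)

def loaded_ng(conc_ng_per_ul):
    """DNA mass actually loaded, given the volume cap."""
    return max(conc_ng_per_ul, 0.0) * template_volume(conc_ng_per_ul)
```

At 2.5 ng/µL the full 5 ng fits in 2 µL, but at 0.1 ng/µL the capped 10 µL delivers only 1 ng, which is exactly the low-input situation that inflated contaminant fractions in the Mock 2 samples.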
A similar effect was also observed after the sample dilution necessary for successful amplification of some phenol-chloroform-extracted samples, which likewise meant starting from a lower amount of DNA template and resulted in a significantly higher relative abundance of contaminant ASVs. To distinguish possible contaminants, we included NTCs for the PCR reaction and negative controls for the kit extraction. This approach facilitated the identification and exclusion of contaminants and showed that the observed differences between the samples spiked with Mock 1 and Mock 2 could largely be explained by the relative abundance of ASVs identified as contaminants. Sequencing results revealed that although 6 of the 12 unamended irradiated samples yielded positive PCR results, more than 96% of the total reads (and over 99% in four of the six samples) corresponded to contaminants (Figure ). The remaining reads may have originated from dead microorganisms or spores that persisted in the bentonite after irradiation but represented an insignificant fraction. Our findings confirm that irradiation is an effective method for obtaining sterile bentonite samples, which can serve as negative controls in microbiome studies involving bentonite. Moreover, this study demonstrates the importance of vigilance regarding the presence of contaminant sequences in samples from low-biomass environments, where the obtained DNA concentration is often low.

Microbial community composition

In contrast to the observed variation in the total amount of DNA obtained from the samples, the microbial community composition showed relatively limited variation after the removal of the contaminant ASVs. In the absence of bentonite, the two methods evidenced a consistent microbial community corresponding to the expected composition. This suggests that both methods are suitable for DNA extraction from a mixture of microbial strains with different characteristics.
However, the presence of bentonite had a significant effect on the microbial composition of samples for both methods. This result might be due to the overall lower DNA yields from bentonite samples and highlights the possible negative impact of bentonite on extraction efficiency. Nevertheless, no differences compared to the theoretical composition were observed in bentonite samples extracted by both extraction methods and Spearman's correlation coefficients were similar in most samples independent of the presence of bentonite. In fact, Spearman's correlation coefficients were higher and more consistent among the samples spiked with Mock 2, regardless of the DNA extraction method employed. In Mock 2, one ASV is highly dominant so our results indicate that both methods are highly robust in conditions where only one (or a few) species is/are dominant. More variation was observed in samples spiked with Mock 1, where the species were more equally abundant. However, variability was more pronounced among different laboratories than it was dependent on the presence of bentonite. Consequently, both methods seem to provide a reliable representation of the actual microbial composition in the Mock samples spiked in the bentonite, despite the described differences.

We conducted an inter-laboratory comparison of two DNA extraction methods on MX-80 bentonite. Our results, along with previously published evidence, clearly demonstrate that even minor changes in extraction protocols can significantly affect both the efficiency and purity of the extracted DNA. Differences in DNA yield based on the choice of extraction method can be important, especially since bentonite is typically a low-biomass environment. Importantly, our findings indicate that the choice between the two methods is not critical, as each has advantages. However, retaining consistency in the chosen method is essential, as comparing results becomes challenging, particularly in the presence of bentonite. In general, the kit-based method with an intermediate elution in 2.3 mL and further purification with the Genomic DNA Clean & Concentrator™ Kit and a final elution in 125 μL is the preferred procedure, as it results in highly pure DNA and is the least time-consuming. However, pooling a large number of samples might be necessary to obtain sufficient DNA using this method if very low biomass is present. In such cases, the phenol-chloroform-based method appears to be the optimal choice as it yields a higher amount of DNA. However, this method is more time-consuming and may be more susceptible to impurities in the final DNA sample and technical variations.
It is recommended to wash with a mixture of chloroform and isoamyl alcohol and to implement additional washing steps after DNA precipitation to obtain amplifiable DNA. Lastly, our findings emphasize the importance of including appropriate controls when working with challenging samples, particularly those with low biomass.

Kristel Mijnendonckx: Conceptualization; methodology; funding acquisition; visualization; writing – review and editing; writing – original draft; formal analysis; data curation; project administration. Carla Smolders: Investigation; methodology. Deepa Bartak: Investigation; methodology; writing – review and editing; writing – original draft. Trung Le Duc: Methodology; investigation. Mar Morales-Hidalgo: Investigation; methodology; writing – review and editing. Cristina Povedano-Priego: Investigation; methodology; writing – original draft; writing – review and editing. Fadwa Jroundi: Methodology; supervision; writing – review and editing. Mohamed L. Merroun: Funding acquisition; writing – review and editing; supervision; resources. Natalie Leys: Writing – review and editing; funding acquisition; resources; supervision. Katerina Cerna: Conceptualization; investigation; methodology; writing – review and editing; writing – original draft; supervision; resources; funding acquisition.

The authors declare no conflicts of interest.

Data S1. Supporting information.
On the utilization of polygenic risk scores for therapeutic targeting

Over 80% of global consumption of prescription opioid pain killers (drugs that bind and act through one of the opioid receptors) is in the United States, which accounts for just under 5% of the world's population. It is estimated that almost 100 million US residents took an opioid last year, with between 5% and 10% of these people likely to become addicted or at least transition to opioid use disorder (OUD). This in turn helps explain the fact that upwards of 50,000 people died of a drug overdose in 2016, or 2% of all deaths in the US. An increasing proportion of this is now attributed to fentanyl, a synthetic drug initially developed for surgical sedation, which is 50 times more potent than heroin and is available off prescription (on the street) or laced into counterfeit prescription drugs. A microscopic increase in the dosage of fentanyl in one bad pill can be lethal to unsuspecting addicts just trying to cope with chronic pain. There are three potential targets for prediction of opioid response. One is identification of individuals at high risk of addiction; another is identification of poor metabolizers, who have reduced conversion of ingested drug into a bioactive form like morphine and hence are not receiving analgesic benefits; and conversely, the third is identification of ultrarapid metabolizers, who produce so much active drug that they are at risk of serious adverse events, including respiratory depression, severe nausea and constipation, and dysphoria. Guidelines for dosing are now provided by the Clinical Pharmacogenetic Implementation Consortium (CPIC) based on the known large effect of allelic variation at the CYP2D6 locus on metabolism of at least codeine and tramadol and possibly oxycodone and hydrocodone.
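CPIC's CYP2D6 guidance works by summing per-allele activity scores into a metabolizer phenotype. The sketch below is a simplified illustration only: the allele values and phenotype cutoffs are reproduced from memory of one published consensus and should be checked against the current CPIC tables before any real use.

```python
# Hypothetical, simplified CYP2D6 activity-score lookup. Allele values
# and cutoffs are illustrative; consult the live CPIC tables for
# clinical work.
ALLELE_ACTIVITY = {
    "*1": 1.0,    # normal function
    "*2": 1.0,    # normal function
    "*10": 0.25,  # decreased function
    "*4": 0.0,    # no function
    "*5": 0.0,    # gene deletion
}

def cyp2d6_phenotype(allele_a, allele_b, copies_a=1, copies_b=1):
    """Map a diplotype (with optional copy-number) to a metabolizer
    class by summing activity scores."""
    score = (ALLELE_ACTIVITY[allele_a] * copies_a
             + ALLELE_ACTIVITY[allele_b] * copies_b)
    if score == 0:
        return "poor"
    if score <= 1:
        return "intermediate"
    if score <= 2.25:
        return "normal"
    return "ultrarapid"
```

Under this scheme a *4/*4 carrier gets no analgesia from codeine (poor metabolizer), while a duplicated normal allele pushes the score past the upper cutoff into the ultrarapid range associated with toxicity; copy-number variation is exactly why CYP2D6 genotyping is harder than calling a few SNPs.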
Approximately 7% of the population fall into either the poor or ultrarapid metabolizer category and may benefit from genetic assessment, but there are over 70 known SNP variants and copy-number variations, so genotyping is currently performed at only fewer than 10 medical centers in the US. Prediction of OUD or addiction is a much more difficult problem. Variants in the OPRM1 gene, which encodes the μ opioid receptor expressed in the central nervous system, have been repeatedly associated with measures of response to pain medication, but these findings came from small studies and did not reach anything close to genome-wide significance. Nevertheless, the San Diego company Proove Biosciences developed a test based on 11 SNPs broadly associated with addiction, mostly in neurotransmitter receptors and transporters, but the company was forced into receivership in August of 2017 after facing charges of kickbacks and legitimate concerns surrounding the validity of the aggressively marketed test. Extrapolation of their sensitivity analysis on the assumption of 8% prevalence of OUD implies a precision of just 30%: in other words, the majority of the small percentage of patients recommended not to take opioids would actually not be at high risk of addiction. By contrast, if a negative predictor could identify two-thirds of the population with half the risk of progression, allowing social services and other health workers to devote more resources and attention to monitoring the one-third of higher-risk individuals, then successful intervention in just half of these cases would reduce the OUD rate by 25%. Alternatively, if the aim is to generate a test with a precision of at least 50%, then targeting 5% of the population would require a score defining this group with an odds ratio of 8.7, and highly effective treatment would also reduce OUD by 25% (see ).
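The precision figure quoted above follows directly from Bayes' rule. A minimal sketch, with hypothetical sensitivity and specificity values chosen only for illustration (the actual operating characteristics of the Proove test were not published), shows how a test applied at 8% OUD prevalence can yield a precision near 30%:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value (precision) of a binary risk test:
    PPV = sens * prev / (sens * prev + (1 - spec) * (1 - prev))."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical operating point: 50% sensitivity, 90% specificity,
# applied at the ~8% OUD prevalence cited in the text.
precision = ppv(sensitivity=0.5, specificity=0.9, prevalence=0.08)
print(f"{precision:.1%}")  # ~30%: most positives are false positives
```

The low precision is driven by prevalence: even a fairly specific test generates far more false positives than true positives when the condition affects fewer than 1 in 10 people screened.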
It is remarkable that no genome-wide association study (GWAS) for opioid addiction based on hospital records has yet been reported; it is conceivable that phenome-wide association studies (PheWASs) on a million patients will have the requisite power for development of a PRS. One billion people worldwide, including 75 million US residents, suffer from hypertension. It is well established that above 115 mmHg of systolic blood pressure (SBP), every increase of 20 mmHg doubles the risk of cardiovascular disease (CVD). There is much debate over guidelines, but the 8th Joint National Committee (JNC) report called for a combination of lifestyle changes and pharmaceutical intervention, generally including a thiazide diuretic and possibly one or more of an angiotensin converting enzyme (ACE) inhibitor or a blocker of the angiotensin receptor, beta-adrenoreceptor, or calcium channels, to keep SBP below 150, particularly in elderly persons over the age of 60. Formerly, this target was 140, or even 130 in patients with chronic kidney disease or diabetes, as there is strong evidence that doing so almost halves the risk of cardiovascular death. As of December 2017, the American Heart Association (AHA) now regards almost half of all US residents as in need of reducing their blood pressure. A randomized controlled trial involving over 9,000 patients evaluated the effectiveness of intensive therapy, with an average of almost three drugs targeting SBP less than 120 mmHg, compared with standard treatment, with an average of two drugs targeting SBP less than 140, in elderly individuals with incident or preclinical heart disease (generally overweight and with a high Framingham risk score). It was stopped after just 4 years because of a clear benefit of intensive therapy, which showed a 25% reduction in primary end points (myocardial infarction, stroke, heart failure) from 2.19% to 1.65% events per year.
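The trial's absolute risk reduction translates directly into a number needed to treat (NNT). A minimal sketch using the 2.19% and 1.65% annual event rates from the trial described above (the constant-rate extrapolation to a decade is a simplifying assumption of mine):

```python
def nnt(control_rate: float, treated_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / (control_rate - treated_rate)

# Annual primary-event rates from the intensive blood pressure trial.
per_year = nnt(0.0219, 0.0165)              # ~185 patient-years per event avoided
per_decade = nnt(0.0219 * 10, 0.0165 * 10)  # ~19, assuming constant annual rates
```

Note how the same relative risk reduction of 25% yields very different NNTs depending on the time horizon over which events accumulate.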
Between one-quarter and one-third of these events led to cardiovascular deaths, which were also significantly reduced. Follow-up analyses also revealed that there was no difference between the treatment arms in self-perception of effectiveness of medication (which can be taken to mean either that there is unlikely to be any obstacle to adherence to intensive treatment or that there is no clear benefit from it in terms of general daily health). Furthermore, economic microsimulation showed that the average additional cost of intensive therapy of approximately US$1,000 per year, or around just 5% of the annual total cost of care for these individuals, is well below the "acceptable" threshold of US$50,000 per quality-adjusted life-year (QALY) gained, given that median life expectancy is 12 years and an average extension of 3 months was estimated from the data. Such data argue for general introduction of intensive blood pressure control in this high-risk CVD population. shows that targeting intensive therapy to the 5% identified at highest polygenic risk, who have an odds ratio of 3.3 according to Khera and colleagues, would prevent 4% of all events, which is 15% of those preventable if everyone is treated. Two-thirds of the events could be avoided if half the population is treated aggressively. However, it also needs to be recognized that the rate of drug-induced serious adverse events, mostly declining kidney function (often leading to chronic failure), syncope (fainting), and hypotension, was considerably higher (1.4× to 2.5×, depending on criteria) in the intensive treatment group than the decline in cardiovascular events. Furthermore, the NNT for primary events per year is just over 60; for all-cause deaths and cardiovascular deaths it is double that, corresponding to around 15 over a decade.
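The arithmetic behind targeting the top 5% can be sketched as follows. For outcome rates of a few percent, the odds ratio approximates the relative risk (this rare-outcome approximation is mine), so the share of all events occurring in the high-risk group, and hence the share preventable by treating only that group, follows from the group size, its relative risk, and the assumed 25% response rate:

```python
def events_in_top_group(fraction: float, relative_risk: float) -> float:
    """Share of all events occurring in the top-risk fraction of the
    population, given its relative risk versus everyone else."""
    return (fraction * relative_risk) / (fraction * relative_risk + (1.0 - fraction))

# Top 5% at odds ratio ~3.3 (taken as ~relative risk for a rare outcome),
# with an assumed 25% response rate to intensive therapy.
share = events_in_top_group(0.05, 3.3)  # ~0.15 of all events fall in the top 5%
prevented = 0.25 * share                # ~0.04 of all events prevented
# Treating everyone would prevent 25% of events, so treating only the
# top 5% captures ~15% of the preventable events.
```
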
This means that the 3 months of QALYs per patient more likely reflects 1 in 15 or more patients avoiding an event, and thereby meeting their normal life expectancy rather than an early death, with ambiguous benefit to the remainder. Meta-analysis of the FINRISK, Framingham cohort, and UK Biobank studies has identified PRSs based on either 46,000 or 1.7 million SNPs for which the top and bottom quintiles are differentiated by 15 years in the age at which cumulative incidence of acute coronary events reaches 10%, with that threshold never met in low-risk women. Assuming a constant response rate to therapy of 25% for all genetic risk thresholds, targeting the top quintile would prevent one-third of deaths in men, halving the NNT. The goal for therapeutic intervention in diabetes is not so much prevention of disease onset as prevention of progression to serious morbidities and mortality, due mainly to CVD but also retinopathy, limb amputation, and internal organ damage. The frontline treatment has long been metformin, which reduces blood glucose for the most part by inhibition of mitochondrial gluconeogenesis. Its prescription to the vast majority of type 2 diabetics is largely based on perceptions following a 1998 study implying a 35% reduction in mortality in overweight patients, reinforced by the 2002 United Kingdom Diabetes Prevention Program conclusion that metformin reduced diabetes incidence by 18% over a decade. The drug is well tolerated by over 80% of patients and relatively inexpensive at around US$50 a year, so even if only a minority of patients benefit, there is little perceived harm in widespread usage. However, meta-analyses have failed to confirm that there is in fact long-term benefit, with one prominent study in 2012 concluding that the drug is as likely to increase all-cause mortality by 31% as to reduce it by 25%.
Part of the difficulty is that the early analyses excluded patients also receiving sulfonylureas, which multiple studies have associated with elevated mortality and cardiovascular event rates and which are given as supplementary therapy to patients whose glucose levels are not controlled by metformin (such nonresponders may be more likely to have poor outcomes). Similarly, the ACCORD study found that intensive control of glycemia and lipids in long-term diabetics can increase all-cause mortality, emphasizing the importance of new strategies to guide prescription. In the past decade, a new generation of drugs has emerged that act more broadly to reduce the so-called ominous octet of diabetes pathologies: hepatic glucose production, pancreatic glucagon secretion by α cells, insulin secretion by β cells, neurotransmitter dysfunction, cardiac endothelial damage, reduced glucose uptake in muscle, elevated lipolysis, and renal glucose reabsorption. Because hyperglycemia is not thought to be the primary cause of cardiovascular pathophysiology, which is responsible for 80% of diabetes mortality, therapeutic intervention increasingly focuses on CVD. Agonists of the GLP-1 receptor and SGLT2 inhibitors have been shown in large randomized clinical trials to reduce stroke, myocardial infarction, and cardiovascular death by approximately 20% in diabetic patients with CVD, independent of HbA1c (and, hence, blood glucose) reduction. Drugs like liraglutide (Victoza) and exenatide (Byetta) currently cost over 100 times as much as metformin. Although the price will presumably fall when they become generic in the next few years, it would seem desirable to identify the subset of the diabetic population with CVD who are most likely to benefit and to assess whether patients at highest risk of CVD are more responsive.
One study found that a lipid PRS did not differentiate metformin-responsive low-density lipoprotein (LDL) reduction, but it would be more informative to know whether the type 2 diabetes or CVD PRS identifies patients who are more likely to respond to the new generation of GLP-1 receptor agonist and SGLT2 inhibitor drugs. Economic modeling has already argued that the combination of metformin plus a DPP-4 inhibitor, another drug class that reduces mortality by around 20%, though expensive at approaching £20,000/QALY gained, is within the accepted guidelines for cost-effectiveness. Given that adiposity, measured both generally as body mass index (BMI) and centrally as waist-to-hip ratio (WHR) adjusted for BMI (WHRadjBMI), is strongly associated with metabolic syndrome, it has been argued that weight reduction is among the most important public health goals for the coming decades. Mendelian randomization studies of the UK Biobank with a PRS genetic instrument explaining approximately 2% of BMI indicate that an increase of one standard deviation unit (approximately 5 kg/m²) causally doubles the risk of type 2 diabetes, and similar results are seen in the DIAGRAM study for WHR, with an independent PRS explaining approximately 1% of the trait. Although it is notoriously difficult to maintain a 10% reduction in weight long term, recent studies indicate that intensive lifestyle intervention is effective and can produce remission of diabetes. Moreover, even moderate exercise significantly reduces cardiovascular mortality across urban and rural settings in low-, moderate-, and high-income countries. Major depressive disorder (MDD) has rapidly become the second-leading source of morbidity globally. At least 1 in 20 people will lose meaningful quality-adjusted life-years to depression, and total costs to the US healthcare system alone have been estimated at up to US$100 billion per year.
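The Mendelian randomization estimate above implies a simple exponential dose-response: if each standard deviation of BMI (about 5 kg/m²) causally doubles type 2 diabetes risk, the implied relative risk for any BMI difference can be sketched as follows (the extrapolation beyond one standard deviation is my assumption, not a claim from the cited studies):

```python
def t2d_relative_risk(bmi_delta: float, sd: float = 5.0) -> float:
    """Relative risk of type 2 diabetes implied by a causal doubling
    of risk per standard deviation (~5 kg/m^2) of BMI."""
    return 2.0 ** (bmi_delta / sd)

# A 10 kg/m^2 higher BMI (two standard deviations) implies 4x the risk.
rr = t2d_relative_risk(10.0)  # 4.0
```
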
Even though meta-analyses provide little indication that antidepressant medications (ADMs) lead to better rates of long-term remission than behavioral therapies, there are several reasons why the adoption of ADMs is only likely to increase in the coming years: the short-term benefits for symptom relief are superior, there is a severe shortage of trained therapists, and insurance coverage for mental health is unreliable. However, only around half of patients will respond to the first ADM they are prescribed, and most of the two-thirds who do benefit will need to test three to five drugs before finding a regimen that is both effective and tolerated. Including hospitalization, this may cost tens of thousands of dollars, so once again, there are enormous economic (and clinical) benefits to be gained from genomic predictive evaluation. The heritability of MDD (at about 30%) is one of the lowest of all common disorders, in part reflecting broad heterogeneity in environmental contributions, and accordingly, GWAS has made few inroads. Despite a sample size including 120,000 cases, the largest studies to date have discovered just 15 replicated loci for MDD, and a genome-wide PRS explains less than 1% of the variance and thus shows little predictive potential. An even larger GWAS for neuroticism turned up 116 independent loci, and although the polygenic score is highly correlated with that for depression, it still explains just a small fraction of the variance. Furthermore, the genetic correlation between Asian- and European-ancestry populations is notably less than 1, and somewhat differential genetic risk is implicated for early- and late-onset disease. Similarly, GWAS has not yet identified any replicated genome-wide significant loci for ADM response, even though common variants explain over 40% of the variance.
There is some preliminary evidence that quartiles of polygenic scores for openness, neuroticism, and conscientiousness differentiate response to selective serotonin reuptake inhibitors (SSRIs) and possibly predict remission, but differences in the sign of effect at 4 and 8 weeks of treatment caution that many more than 1,000 subjects need to be evaluated. There is also an extensive body of literature evaluating two dozen candidate genes for pharmacogenetic modulatory effects. These mostly target serotonin production and signaling (because they are the targets of the major classes of ADM) as well as drug metabolism and bioavailability. Meta-analyses implicate variation in the serotonin transporter and receptors (the 5HTTLPR in SLC6A4, HTR2A) as well as cytochrome P450s and a few other loci, though with apparent heterogeneity across ethnicities, unexpected effects such as a heterozygote advantage for V66M in BDNF, and an overall lack of Bonferroni-adjusted significance. It should be noted that, with odds ratios in the vicinity of 1.5, current sample sizes of fewer than 3,000 patients are underpowered, so much larger pharmacogenetics GWASs are urgently required. Despite this, CPIC has released guidelines for implementation of pharmacogenetics for SSRI dosing based on CYP2D6 and CYP2C19 genotypes, and at least four companies are offering combinatorial pharmacogenetics diagnostic tests (CPGx: the Neuropharmagen, GeneSight, Admera PGxPsych, and MD Labs Rxight panels). I could not find publications describing clinical assessment of the latter two, but the first two both show meaningful improvements in 12-week response (for example, reducing depression symptoms in 50% of patients when physician-ordered ADMs are congruent with test predictions, relative to 35% of those who are not given a CPGx test and 39% for all tested patients regardless of congruence).
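The reported response rates imply a modest number needed to genotype. A back-of-envelope sketch, treating the 50% congruent-prescription response rate versus the 35% untested rate as if they came from a randomized comparison (which is my simplification; the published comparisons were not all randomized):

```python
def nnt_from_response_rates(with_test: float, without_test: float) -> float:
    """Patients who would need a pharmacogenetic test for one additional
    12-week responder, given the two observed response rates."""
    return 1.0 / (with_test - without_test)

# 50% response with test-congruent prescribing vs 35% without testing.
n = nnt_from_response_rates(0.50, 0.35)  # ~6.7 patients tested per extra responder
```
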
One of the companies also reports cost-effectiveness of the test, saving almost US$4,000 in medication costs per patient treated solely by a primary care provider, relative to US$2,500 for the one-time diagnostic. Because these are proprietary tests, it is impossible from the published data to know which variants are most useful, and clearly much larger independent studies will be needed to comprehensively evaluate utility across dozens of drugs. It is highly likely, though, that much of the benefit comes from the negative predictive value of avoiding ADMs that are predicted either to lead to intolerable side effects or not to alleviate symptoms. Potential healthcare savings in this domain run plausibly to tens of billions of dollars per year once validated, mature pharmacogenetic tests are available. In the US alone, over 1.5 million postmenopausal women suffer clinically significant bone fractures each year. Lifetime risk of both hip and vertebral fractures is over 15%, with hip fracture seriously impeding the mobility of half of patients for an extended period of time and leading to a requirement for long-term ambulatory care in a third of cases. It is not clear whether the 10% elevation in mortality after a fall leading to hip fracture is a consequence of the fracture or comorbid with underlying disease. Hormone replacement therapy with estrogen and progesterone was consistently found to reduce fracture rates by around 40%, but following the Women's Health Initiative's detection of meaningfully elevated rates of invasive breast cancer and cardiovascular morbidity, it is no longer widely adopted. Instead, bisphosphonate inhibitors of bone resorption such as alendronate, which are similarly effective without the serious adverse effects, have become the standard of care, also showing cost-effectiveness in high-risk groups over the age of 70.
Recently, romosozumab, a monoclonal antibody inhibitor of sclerostin that simultaneously increases bone formation and decreases bone resorption, was shown in two randomized clinical trials to reduce both vertebral and nonvertebral fracture rates. Direct comparison of this treatment with alendronate alone in high-risk women with bone mineral density (BMD) in the bottom decile and a history of fracture demonstrated remarkable efficacy, consistent with an NNT for vertebral fracture of less than 20 (6.2% relative to 11.9% over a 2-year follow-up, which would likely be around 10 relative to placebo). However, the US Food and Drug Administration (FDA) has disallowed registration of the drug because the data show a worrying elevation of the serious adverse cardiovascular event rate from 1.9% to 2.5%. A Japanese study argued that as few as four SNPs can differentiate high- and low-risk deciles for vertebral fracture by as much as 10-fold. Concerted, robust genomic and clinical evaluation thus has the potential to stratify patients for treatment: if a polygenic score achieves a relative risk of five for the top decile, then 20% of cases could be prevented with a precision of over 50%. An analysis of the UK Biobank for osteoporosis and general bone fracture is less promising but did not target the low-BMD group. In any case, genetics could rescue a treatment that has the potential to prevent loss of quality of life for hundreds of thousands of women each year while also ensuring that the costs to the healthcare system are contained. With the advent of clinically relevant polygenic scores, the time has come for research designed not just to identify high-risk groups but also to evaluate therapeutic response rates across the distribution of scores. The oft-reported measures of sensitivity and specificity are not generally likely to be clinically useful, as scores that identify the upper two deciles of the risk distribution generally have sensitivities less than 50%.
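The fracture-rate comparison above can be checked directly: an NNT below 20 follows from the 6.2% versus 11.9% two-year fracture rates. A minimal sketch:

```python
def nnt(control_rate: float, treated_rate: float) -> float:
    """Number needed to treat = 1 / absolute risk reduction."""
    return 1.0 / (control_rate - treated_rate)

# Vertebral fracture over 2 years: 11.9% on alendronate alone vs
# 6.2% on romosozumab followed by alendronate.
n = nnt(0.119, 0.062)  # ~17.5, i.e. fewer than 20 women treated per fracture avoided
```
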
assumes case-control comparisons but might actually be more informative in relation to the prediction of therapeutic responses in patients already diagnosed with disease. In some situations, response may be highly correlated with risk of disease, but that cannot be assumed, so more GWASs of disease progression and therapeutic response are needed. Although it is often assumed that all patients should receive treatment, considerations of cost, adverse side effects, development of resistance, and patient choice all militate in some cases toward preferential treatment of the patients at highest risk of progression to life-threatening disease and/or most likely to respond to therapy. Meaningful reductions in the NNT while ensuring high prevention rates will be most attainable if therapy can be targeted to individuals predicted to have the highest response rates. It should generally be possible to prevent 20% to 50% of cases by treating between 5% and 20% of the patients. Conversely, negative prediction may identify those least likely to benefit, which is likely to become increasingly important in relation to expensive next-generation biologics and in markets emphasizing cost control. Two further critical components of this evidence-based approach to clinical intervention are the costs of treatment and patient rights. Economic modeling of healthcare costs is notoriously difficult, but it is nevertheless clear that either the government or private health insurers must bear the short-term costs of expensive new-generation medications such as biologics. Any consequent rationing of options must be evaluated alongside patient rights and desires. argues that some individuals are likely to demand treatment irrespective of where they fit in terms of risk if the drug has been approved for general use; others may prefer not to take drugs as far as possible and will see genomic information as empowering.
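The claim that 20% to 50% of cases could be prevented by treating 5% to 20% of patients can be explored with a small parameter sweep. The score strengths below are illustrative (the odds ratios are taken as approximate relative risks, and the 80% response rate is my assumption for a highly effective therapy, not a value from the Review):

```python
def prevented_share(fraction: float, relative_risk: float, response: float) -> float:
    """Fraction of all cases prevented by treating only the top-risk group:
    (share of events in the group) * (response rate to therapy)."""
    share = fraction * relative_risk / (fraction * relative_risk + 1.0 - fraction)
    return response * share

# Treated fraction and the relative risk a score would need to confer.
for frac, rr in [(0.05, 8.7), (0.10, 5.0), (0.20, 3.3)]:
    print(frac, rr, round(prevented_share(frac, rr, response=0.8), 2))
```

Under these assumptions, treating 5% to 20% of patients prevents roughly a quarter to a third of all cases, consistent with the range asserted in the text when therapy is highly effective.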
Genetic risk scores are an important step toward personalized genomic medicine and will need to be implemented with respect to the specific clinical and economic circumstances of each disease.

Box 1. Genetic risk assessment for common disease

This Review makes three major claims: (1) that evaluation of the appropriate level of prescription medication usage should be a public health priority, (2) that implementation of this policy will vary widely according to disease and therapy, and (3) that genomic diagnosis has an important role to play in stratifying need for specific drugs. There are two broad domains that should be considered: prevention and control of common chronic diseases and treatment of acute, less-common diseases with expensive new-generation drugs and biologics. The latter engages economic issues that are largely the concern of medical providers; it is the former that presents the most prospects for patient-driven solutions. A typical situation involves a middle-aged adult who learns that he/she has elevated biomarkers indicative of high risk for heart disease. Because medical guidelines call for reduction of cholesterol or blood pressure in such cases, a medication is prescribed and almost always accepted. Over the next several months, dosages and specific medications are adjusted to alleviate negative side effects such as nausea and poor mood, or the patient adjusts to the discomfort. Within a couple of years, many will have become noncompliant, with unknown consequences for future care. One response is to develop technologies that track and enforce compliance; another is to engage in thoughtful discussion of whether prescription medication is most appropriate. To this end, we might envisage all prescriptions requiring three levels of conversation with the patient. The first is a comprehensive discussion of adverse outcomes, not just of labeled serious risks and drug interactions but also of prevalent negative impacts on quality of life.
There is likely to be wide variation in the extent of such doctor–patient conversations currently. The second is understanding of the concepts of risk stratification and of precision, including acceptance that if a prescribed drug prevents 20% of primary outcomes, then 80% of such events still occur. Either a large proportion of patients do not respond to the treatment or the response (lower blood pressure, cholesterol) is not sufficient to prevent the cardiovascular event, and there is patient-to-patient variability. The third is to aim to empower patients to make their own decisions about whether their total medical profile places them in a risk category for which the benefit of medication is sufficient to overcome side effects, expense, and inconvenience. Some people will choose to do all they can to prevent an event (no one wants to be denied or refuse treatment only to develop a life-threatening state), whereas some will prefer not to go on medications if they can possibly avoid them (recognizing that no treatment is anywhere near guaranteed to prevent progression). Clearly, extensive physician and patient education and communication will be required to optimize patient engagement. Although it is likely that the majority of people will continue to choose medication, the availability of validated risk stratification could have an important influence on future prescription rates. A major goal for genomic medicine should thus be the development of predictive scores across the full spectrum of disease, integrating genetic evaluations of risk of disease, risk of disease progression, and response to therapy alongside best-practice clinical profiling. 
S1 Table. NNT for select published therapeutic responses. NNT, number needed to treat. (DOCX)

S1 Text. Further background and five additional conditions. (DOCX)
Pharmacogenomic and epigenomic approaches to untangle the enigma of IL-10 blockade in oncology

The use of immunotherapy as a novel therapeutic approach to cancer has become widespread (Ref. ). Immune checkpoint blockade modalities targeting PD-1 and CTLA-4 provide long-lasting immune responses with established therapeutic benefits for some cancer patients (Refs – ). Although targeting cytokines is considered a crucial approach in immunotherapy, as evidenced in the treatment of solid tumours such as renal cell carcinoma (RCC) and melanoma, only interferons (IFNs) and IL-2 have been approved by the Food and Drug Administration (FDA) for use as cancer therapies (Ref. ). IL-10 is considered one of the most promising targets for immunotherapy; however, its controversial role in carcinogenesis hinders the application of its blockade in cancer treatment (Ref. ). IL-10 has been shown to possess both anti- and pro-inflammatory roles in cancer (Ref. ). The intensity of the immunological response to both self and foreign antigens is reduced by IL-10. In light of this, blockade of IL-10 signalling improves vaccine-induced T-cell responses and tumour growth inhibition (Ref. ). On the other hand, tumour regression is also induced by exogenous IL-10, particularly PEGylated (PEG)-IL-10 (Ref. ). These paradoxical data underscore the need to investigate the role of pharmacogenomics, epigenetics and genetic variants in IL-10 and its receptor in order to identify those patients who might benefit from IL-10-targeted therapies. In this review, the authors will address the role of IL-10 in cancer, the currently available IL-10-based immunotherapy, the epigenetic regulation of IL-10 and the single nucleotide polymorphisms (SNPs) present in IL-10 that might influence patient responses to therapy.
The definition of cancer has been revolutionized over the past few decades, from the concept of abnormal cells in isolation to a complex network made up of both neoplastic cells and their surrounding stroma (Refs , , , ). The multifaceted, dynamic milieu of cellular components along with non-cellular compartments portrays what is now known as the tumour microenvironment (TME) (Refs , , ). Such a microenvironment can control the aggressiveness, growth rate and metastatic potential of the tumour (Refs – ). Its cellular components include immune cells such as T lymphocytes (Refs – ), regulatory T cells (Tregs) (Ref. ), B lymphocytes, natural killer (NK) cells (Refs , – ), mesenchymal stem cells (Refs , ), tumour-associated macrophages (Refs , ), tumour-associated neutrophils (Refs , ), dendritic cells (DCs) (Ref. ) and non-immune cells such as pericytes (Ref. ), adipocytes (Refs , ), myeloid-derived suppressor cells (MDSCs) (Refs – ) and cancer-associated fibroblastic cells (Refs , ). Interestingly, these immune cells drive the production of soluble components that include cytokines, chemokines, growth factors and extracellular remodelling enzymes (Refs , ). Such mediators, particularly cytokines, assist in the communication between the cellular TME components and cancer cells, as shown in (Refs , ).

Interleukin-10 (IL-10)

One of these cytokines is the paradoxical interleukin IL-10, which remains an integral part of several malignancies and regulates the secretion of other cytokines. This pleiotropic cytokine was characterized in the late 1980s and was initially named cytokine synthesis inhibitory factor (Refs , ). Later on, six immune mediators (IL-10, IL-19, IL-20, IL-22, IL-24 and IL-26) were grouped into the IL-10 family of cytokines based on their similarities with respect to the structure and location of their encoding genes, their primary and secondary protein structures and their receptor complexes (Refs – ).
Out of these six members, IL-10 has been recognized as a major member mediating different functions within the immune system and cancer cells (Ref. ).

Paradoxical role of IL-10 in oncology

IL-10 produced by immune cells

IL-10 has been causally linked to immunity in both the innate and adaptive immune arms. Different triggers have been shown to induce IL-10 production in various immune cells (Ref. ). The main sources of IL-10 appear to be monocytes and different T-cell subsets (Ref. ). Moreover, DCs, B cells, NK cells, mast cells, as well as neutrophils and eosinophils, can also synthesize IL-10 (Ref. ). During infection, macrophages are considered a major source of IL-10. Several toll-like receptors (TLRs), including TLR2, TLR4, TLR5, TLR7 and TLR9, have been shown to induce IL-10 production in macrophages and DCs (Refs – ). Also, IL-10 production in DCs is enhanced by the co-activation of TLR2 and Dectin-1 (Ref. ). Following exposure to IL-10, DCs can initiate the development of regulatory T cells (Tregs) that limit these effector responses (Refs , ). B cells also express several TLRs that have been shown to promote IL-10 production, including TLR2, TLR4 and TLR9 (Refs – ). Nonetheless, it is also worth mentioning that IFN-α augments IL-10 production from B cells when combined with TLR agonists (Refs , ). Additionally, neutrophils produce IL-10 in response to TLR and C-type lectin co-activation through myeloid differentiation primary response 88 (MyD88) and spleen tyrosine kinase (SYK), respectively (Ref. ). The key producers of IL-10 are Treg cells, which also produce other immunoregulatory cytokines such as TGF-β (Ref. ). The production and action of the two cytokines IL-10 and TGF-β are involved in a positive feedback loop (Ref. ). Concerning the mechanism of IL-10 production by Tregs, it has been shown that IL-2 and IL-4 induce IL-10 production from Tregs (Refs – ).
Additionally, a study concluded that TGF-β is required for the differentiation of Tregs and their production of IL-10 (Ref. ). IL-2 and IL-27 are responsible for inducing IL-10 expression in cytotoxic CD8 + T cells (Ref. ), whereas IL-12 and IL-23 prime CD8 + and CD4 + T cells for IL-10 production (Refs – ). Some studies reported immunosuppressive effects of IL-10, such as inhibiting IFN-γ and TNF-α production by NK cells in vitro (Ref. ). However, other studies reported immunostimulatory effects of IL-10 via the promotion of NK cell cytotoxicity in preclinical models (Refs , ). Adding to the complexity of this master cytokine, one study showed that exposure of malignant cells to IL-10 reduced their sensitivity to cytotoxic T cells but increased NK cell cytotoxicity (Ref. ). This might suggest that IL-10 contributes to fighting malignant cells by stimulating the innate immune arm (Ref. ). As mentioned earlier, one of the main drivers of IL-10 expression in many immune cells is TLR signalling (Ref. ). TLR ligation leads to the activation of several downstream pathways, including the mitogen-activated protein kinase (MAPK) pathway and the phosphoinositide 3-kinase (PI3K)/AKT pathway (Ref. ). Activation of MAPK and the downstream extracellular-signal-regulated kinases (ERK1 and ERK2) is critical for IL-10 production in macrophages and DCs in response to several TLR activators (Refs , , , ). The MAPK pathway eventually activates several transcription factor family members, such as activator protein-1 (AP-1), which activates IL-10 transcription (Refs , , , ). Moreover, ERK and p38 also contribute to IL-10 production in TLR-stimulated macrophages, monocytes and DCs (Refs , – ). ERK and p38 may function cooperatively in their regulation of IL-10 production through their joint activation of the mitogen- and stress-activated protein kinases MSK1 and MSK2, which promote IL-10 production in TLR-stimulated macrophages.
Downstream of MSK1 and MSK2 are the transcription factors cAMP-response element binding protein (CREB) and AP-1, which also bind and transactivate the IL-10 promoter (Refs – ). It is worth mentioning that both ERK and p38 were also shown to directly phosphorylate Sp1, one of the IL-10 transcription factors (Refs , ). The PI3K/AKT pathway also contributes to IL-10 expression in myeloid cells, either by antagonizing glycogen synthase kinase 3 beta (GSK3-β), a constitutively active kinase that inhibits IL-10 production, or through activation of ERK, mammalian target of rapamycin (mTOR) and STAT-3 (Refs – ).

IL-10 produced by cancer cells

IL-10 has been linked to many types of cancer, such as gastric cancer (Ref. ), cervical cancer (Ref. ), lung cancer (Ref. ), breast cancer (Ref. ), colon adenocarcinoma (Ref. ), head and neck cancer (Ref. ), oesophageal cancer, nasopharyngeal cancer, oral cancer (Ref. ) and colorectal cancer (Ref. ). Its role in tumourigenesis is controversial, as it can act as either a tumour suppressor or a tumour promoter. Owing to the complex nature of IL-10, its role in shaping the TME remains a gap that needs further research. Most of the literature is directed towards the pro-tumoural activity of IL-10 in different oncological settings. This could occur through a positive feedback loop with STAT-3: IL-10 has been shown to activate STAT-3, resulting in the upregulation of B-cell lymphoma 2 (BCL-2) or B-cell lymphoma-extra-large (BCL-xL) and the stimulation of cell proliferation by cyclins D1, D2 and B and the proto-oncogene c-Myc, thus contributing to cancer progression (Ref. ). On the other hand, the immunosuppressive activity of IL-10 has been reported on macrophages and DCs, where it was found to dampen antigen presentation, cell maturation and differentiation, resulting in tumour immune evasion as shown in (Ref. ).
Several studies have examined the role of IL-10 in different types of malignancies, as listed below. Previous studies highlighted a significant correlation between IL-10 and the percentage of plasma cells in multiple myeloma patients, as it induces the proliferation of plasma cells (Refs – ). Other studies indicated an elevation of IL-10 in different haematological malignancies such as Hodgkin lymphoma and non-Hodgkin lymphoma (Refs , ). High IL-10 levels were reported to be associated with shorter survival among patients with diffuse large-cell lymphoma (Ref. ). Similarly, high IL-10 levels were found to be a prognostic factor in peripheral T cell lymphoma, associated with worse overall survival, a lower complete response rate and a higher early relapse rate (Ref. ). Moreover, elevated IL-10 at diagnosis was found to be an independent prognostic marker in adult hemophagocytic lymphohistiocytosis patients, helping to guide the choice of treatment strategy (Ref. ).

The riddle of IL-10 at the tumour-immune cell synapse

The balance between pro-inflammatory and anti-inflammatory signals is generally crucial for the maintenance of normal physiology and the prevention of cancer and a wide variety of diseases (Refs , – ). In this context, IL-10 plays a dual role, acting either as a pro-inflammatory or an anti-inflammatory mediator (Ref. ). Regarding its role in cancer, studies have reported that IL-10 secreted by tumours or tumour-infiltrating immune cells allows malignant cells to escape immune surveillance (Refs – ). In a study by Neven et al., IL-10 knockout in mice promoted the development of colon cancer. Moreover, the same study showed that humans deficient in IL-10 signalling molecules were more prone to develop lymphomas at a younger age (Ref. ). As an anti-inflammatory cytokine, IL-10 is considered crucial for the homoeostasis of anti-inflammatory Tregs and the suppression of proinflammatory IL-17-expressing T cells.
However, the action of IL-10 depends on multiple factors, such as the targeted cells, other stimuli, and the timing and duration of its effect (Ref. ). Even with many rationales presented, questions continue to arise about the nature of this complex cytokine.

Is IL-10 blockade a possible option as a novel immunotherapeutic approach for cancer patients?

Controversial data exist regarding the effectiveness of IL-10 immunotherapy in cancer (Ref. ). Cancer vaccines that utilized a monoclonal antibody (mAb) against IL-10 receptors succeeded in increasing CD8 + T cell responses and inhibiting tumour growth whether injected intraperitoneally or subcutaneously (Refs , ). The beneficial effect of IL-10 blockade is best explained through the inhibition of IL-10-induced suppression of DCs, preserving their antigen presentation capacity, which IL-10 otherwise impairs by decreasing the expression of MHC class II and co-stimulatory molecules (Ref. ). Thus, DC-based vaccinations that disrupt IL-10 signalling provide more potent anti-tumour responses (Ref. ). On the contrary, others reported that antibodies targeting IL-10R had no protective effect against tumour growth when used with vaccines containing adjuvants that do not induce IL-10, such as the TLR3 ligand poly(I:C) or anti-CD40 agonistic antibodies (Ref. ). This controversy regarding the effectiveness of therapeutic immunization may be explained by vaccine-induced IL-10 rather than tumour-derived IL-10 (Ref. ). It was previously reported that the prognosis of cancer patients is inversely correlated with elevated serum and tumour IL-10 levels (Ref. ). Despite that, exogenous administration of IL-10 was tested in clinical studies and resulted in immunological activation, as evidenced by higher levels of granzymes and IFN in the serum of patients receiving treatment. Pegylated recombinant (PEG) murine IL-10 promoted rejection of tumours and metastases by enhancing CD8 + T cell-mediated immune responses (Ref. ).
In addition, PEG-IL-10 exhibited immunologic and clinical advantages in solid tumours in clinical trials, particularly in renal cell carcinoma (RCC) and uveal melanoma (Ref. ). CD8 + tumour-infiltrating lymphocytes (TILs) in metastatic melanoma co-upregulate IL-10R and PD-1. While PD-1 blockade or IL-10 neutralization as monotherapies were insufficient to produce anti-tumour activity, combining PD-L1 blockers with IL-10R blockers exerted anti-tumour effects by enhancing T cell responses, thereby suppressing tumour growth (Ref. ). Similarly, mice with ovarian tumours treated with PD-1 blocking antibodies had higher levels of IL-10 in their serum and ascites; when IL-10 and PD-1 blockers were used together, infiltration of immunosuppressive MDSCs was reduced and immunological activity was increased (Ref. ). On the other hand, a multi-centre trial involving 111 patients with advanced malignant solid tumours unresponsive to previous therapies revealed that anti-PD-1 treatment (pembrolizumab or nivolumab) in combination with PEG-IL-10 offered a new therapeutic option (Ref. ). Most immune cells express IL-10 receptors and can activate the subsequent downstream signalling pathways. Therefore, the paradox underlying IL-10 blockade, and whether it carries a beneficial or detrimental role in cancer treatment, might be deciphered if we understood how exactly these cells react to IL-10 signalling through comprehensive genomic, epigenomic and proteomic analysis.

Epigenomic approach

Epigenetic regulation includes DNA methylation, histone modifications, histone acetylation and the action of non-coding RNAs (ncRNAs) (Refs – ). Epigenetic changes arising from alterations in chromatin usually lead to changes in gene expression. Moreover, epigenetic changes can either activate or suppress an oncogene or a tumour suppressor gene (Refs – ). It has recently been revealed that IL-10 is highly epigenetically regulated (Refs , ).
It is worth noting that this level of post-transcriptional regulation, via the action of non-coding RNAs including microRNAs (miRNAs) (Refs , ), long non-coding RNAs (lncRNAs) (Refs – ) and circular RNAs (circRNAs) (Refs , , ), might be a relevant explanation for the differential expression and effects of IL-10 in different cells of the TME, despite the common pathways for IL-10 induction mentioned earlier in this review. Epigenetic modulation of IL-10 via DNA methylation, histone modifications and histone acetylation has been highly evident in several reports, has been extensively studied (Refs , ) and was recently reviewed in (Ref. ). However, the epigenetic regulation of IL-10 via ncRNAs (miRNAs, lncRNAs and circRNAs) has only recently begun to be explored. Therefore, a closer look at the epigenetic regulation of IL-10 via ncRNAs could aid in understanding the complex nature of this cytokine.

microRNAs (miRNAs) regulating IL-10

miRNAs are short ncRNAs, around 18–25 nucleotides long, that exist widely in plants, viruses and animals (Refs , , , ). miRNAs regulate gene expression either by degrading the target mRNA or by suppressing mRNA translation and reducing mRNA stability upon binding to the 3′ untranslated region (3′UTR) of a target gene (Refs , ). A miRNA can therefore inhibit or activate the expression of tumour suppressors or oncogenes. Generally, oncogenic miRNAs (oncomiRs) are over-expressed in cancers, whereas miRNAs with tumour-suppressive function are under-expressed (Refs , , ). When these oncomiRs or tumour suppressor miRNAs are inhibited or stimulated, respectively, cancer cell metastasis, proliferation and survival may be reduced, depending on the specific miRNA affected and the type of cancer (Refs , , ).
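As a concrete illustration of the 3′UTR targeting described above, the sketch below searches a 3′UTR for sites complementary to a miRNA's "seed" (nucleotides 2–8), the core rule used by seed-based target prediction. The sequences are illustrative placeholders, not the actual miR-106a or IL-10 sequences, and real target prediction weighs many additional features (site context, conservation, binding free energy).

```python
# Toy miRNA seed-match search. A canonical target site in a 3'UTR is the
# reverse complement of the miRNA seed (nucleotides 2-8). Sequences below
# are illustrative only, not real miRNA/IL-10 sequences.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna: str) -> str:
    """Reverse complement of the 7-nt seed (miRNA positions 2-8)."""
    seed = mirna[1:8]  # 0-based slice covering nucleotides 2-8
    return "".join(COMPLEMENT[nt] for nt in reversed(seed))

def find_seed_matches(mirna: str, utr: str) -> list[int]:
    """0-based positions in the 3'UTR where the seed-complementary site occurs."""
    site = seed_site(mirna)
    return [i for i in range(len(utr) - len(site) + 1)
            if utr[i:i + len(site)] == site]

mirna = "UAGCUUAUCAGACUGAUGUUGA"          # illustrative 22-nt miRNA
utr = "AAAUGAUAAGCUAACCGUAUGAUAAGCUA"      # illustrative 3'UTR fragment
print(find_seed_matches(mirna, utr))       # two candidate sites
```

Tools such as TargetScan build on this seed rule with extensive additional scoring; the snippet only shows the core string-matching idea.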
Moreover, some cancers are dependent on specific oncomiRs, and suppressing such oncomiRs could completely regress cancer growth (Refs , , ). A few studies have identified miRNAs that modulate IL-10 expression. In a study testing for possible post-transcriptional modulation of IL-10Rα and IL-10Rβ expression by miRNAs, three miRNAs (miR-15a, miR-185 and miR-211) were shown to have seed regions that target the 3′UTR of IL-10Rα. These miRNAs inhibited the proliferation of IL-10-treated melanoma cells, while their inhibitors increased melanoma cell proliferation (Ref. ). IL-10 was also shown to be targeted by several other miRNAs (Ref. ). Another study showed that miR-106a could bind to the 3′UTR of IL-10 and significantly downregulate its expression in vitro (Ref. ). Two transcription factors, early growth response 1 (Egr1) and Sp1, were implicated in the induction of miR-106a, which consequently reduced IL-10 levels (Ref. ). Furthermore, an inverse relation was reported between Egr1-stimulated miR-106a and IL-10 levels. It is also worth mentioning that miR-106a is part of a cluster known to be dysregulated in 46% of human T-cell leukaemias. Thus, it was deduced that the promotion of leukaemic cell survival by IL-10 might occur through its modulation by miR-106a (Ref. ). Another miRNA reported to positively regulate IL-10 is miR-4661. Binding of miR-4661 to the 3′UTR of IL-10 results in a net increase in the half-life of IL-10, an effect favoured by preventing tristetraprolin (TTP) from binding to the IL-10 mRNA (Ref. ). TTP is an RNA-binding protein that plays a critical role in regulating proinflammatory immune responses by destabilizing target mRNAs via binding to the AU-rich elements (AREs) in their 3′UTRs (Ref. ). Moreover, miRNA/IL-10 interactions were reported in a study by Liu et al.
revealing that miR-98-mediated post-transcriptional control could potentially be involved in fine-tuning IL-10 production in endotoxin tolerance (Refs , ). On the other hand, IL-10 was reported to upregulate miRNAs that contribute to an anti-inflammatory response, such as miR-187, and to downregulate those that are highly pro-inflammatory, such as miR-155 (Ref. ). IL-10 was able to downregulate LPS-induced miR-155 (Ref. ). Moreover, in in-vivo studies, mice deficient in miR-155 could not generate a protective immune response (Ref. ), whereas in IL-10-deficient mouse cells, miR-155 levels were markedly increased. miR-155 was previously known to target a number of genes involved in the immune response, such as suppressor of cytokine signalling (SOCS), inhibitor of NF-κB kinase subunit epsilon (IKBKE) and Fas-associated death domain (FADD). Thus, the targeting of this miRNA by IL-10 is likely to elucidate key mechanisms through which IL-10 exerts control in the cell. Another study uncovered details of the IL-10 pathway by examining the effect of IL-10 on miRNA expression in IL-10-deficient mice. Ten miRNAs were found to be upregulated in IL-10-deficient mice (miR-19a, miR-21, miR-31, miR-101, miR-223, miR-326, miR-142-3p, miR-142-5p, miR-146a and miR-155) (Ref. ). miR-223 could hinder the Roquin ubiquitin ligase by binding to its 3′UTR, eventually regulating the production of IL-17 and its inhibitor IL-10; this suggested a mechanism by which IL-10 could modulate the expression of IL-17 through miR-223. As previously mentioned, IL-10 can also induce the expression of anti-inflammatory miRNAs such as miR-198, which is known to suppress TNF-α and IL-6, consequently promoting an anti-inflammatory environment (Ref. ).
Collectively, these findings on the mutual interaction between IL-10 and miRNAs highlight an important role for miRNA-mediated regulation of IL-10 expression and provide new insights into the intertwined mechanistic details of this immunomodulatory cytokine.

LncRNAs regulating IL-10

RNA transcripts longer than 200 nucleotides that are not translated into protein are regarded as lncRNAs (Refs , , ). LncRNAs play a significant role in the occurrence and development of cancer and can thereby regulate the expression of cytokines such as IL-10 and IFN-γ, as reported in a study by Tang et al. on non-small cell lung cancer (NSCLC) (Ref. ). A large number of lncRNAs have been associated with cancer, as recognized by genome-wide association studies on numerous tumours (Ref. ). They are believed to exhibit both tumour-suppressive and tumour-promoting functions, and hence represent promising biomarkers and therapeutic targets for cancer (Ref. ). Increased expression of the lncRNA SNHG1 was reported in CD4 + TILs of cancerous breast tissue, whereas the expression of FOX and IL-10 was greatly reduced by siRNA against SNHG1 (Ref. ). Moreover, silencing the lncRNA cox-2 was believed to increase the expression of IL-10, Arg-1 and Fizz-1 in M2 macrophages (Ref. ). A study conducted by Zhou et al. reported reduced expression of IL-10 upon suppression of lnc-LINC00473 (Ref. ). Additionally, increased expression of IL-10 has been associated with knockdown of the lncRNA growth arrest-specific transcript 5 (GAS5) and reduced colorectal cancer (CRC) cell proliferation, while knockout of GAS5 promoted CRC colony formation and proliferation (Ref. ). LncRNAs are known to regulate various signalling pathways such as TGF-β, STAT3, Hippo, EGF, Wnt, PI3K/AKT and p53, whilst IL-10 is mostly involved in T-cell immune surveillance and suppression of cancer-associated inflammation.
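The working definition above (transcripts longer than 200 nucleotides that are not translated into protein) can be sketched as a simple two-part filter. The 100-codon ORF cutoff below is a common rule of thumb, not a criterion stated in this review, and real lncRNA annotation relies on dedicated coding-potential tools.

```python
# Toy filter for candidate lncRNAs: longer than 200 nt and lacking any long
# open reading frame (here, >= 100 codons). Sequences are hypothetical.

def longest_orf_codons(seq: str) -> int:
    """Longest ORF (AUG start to stop codon, in codons) across three frames."""
    stops = {"UAA", "UAG", "UGA"}
    best = 0
    for frame in range(3):
        codons = [seq[i:i + 3] for i in range(frame, len(seq) - 2, 3)]
        start = None
        for idx, codon in enumerate(codons):
            if codon == "AUG" and start is None:
                start = idx
            elif codon in stops and start is not None:
                best = max(best, idx - start)
                start = None
    return best

def is_candidate_lncrna(seq: str, min_len: int = 200, max_orf: int = 100) -> bool:
    """Length above min_len and no ORF reaching max_orf codons."""
    return len(seq) > min_len and longest_orf_codons(seq) < max_orf

# A 300-nt repeat with no AUG anywhere passes the filter.
print(is_candidate_lncrna("ACG" * 100))  # True
```

A transcript carrying a long AUG-to-stop frame (e.g. a 151-codon ORF) would fail the same filter, mimicking exclusion of protein-coding transcripts.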
The expression of interleukins is regulated by lncRNAs known to be involved in various types of cancer. For instance, previous work by our group highlighted the potential of miRNAs and lncRNAs in the regulation of IL-10 in breast cancer, where miR-17-5p was identified as a dual regulator of TNF-α and IL-10. Additionally, knocking down the lncRNAs MALAT1 and/or H19 induced miR-17-5p and decreased TNF-α and IL-10 expression levels (Ref. ). Such reports highlighted the immune-activator potential of miRNAs and the oncogenic potential of lncRNAs in cancers through the regulation of immunological targets in the TME. Hence, the relationship between the lncRNAs regulating IL-10 in various cancers needs further validation to establish a valid therapeutic link (Ref. ).

CircRNAs regulating IL-10

CircRNAs are special ncRNA molecules with a distinctive ring structure that play significant roles as gene regulators and are considered among the most recently discovered epigenetic factors (Refs , ). Abnormal production of circRNAs was found to influence the onset, progression and metastasis of cancer by acting as either tumour-suppressive or oncogenic factors (Refs , – ). This occurs via interactions with proteins, miRNA sponge function and post-transcriptional regulation (Refs , , ). Moreover, a line of evidence showed that circRNAs play pivotal roles in chemoresistance (Refs , ). Recently, specific circRNAs were found to possess immunomodulatory functions and to alter the response of the TME by regulating the functions of tumour-infiltrating immune cells. For instance, CD4 + T cell activity is enhanced by circ0005519, which promotes the expression of IL-13 and IL-6 by affecting the expression of hsa-let-7a-5p (Ref. ). On the other hand, circNT5C2 could attenuate the immune response by targeting miR-448 and serve as an oncogene by promoting tumour proliferation and metastasis (Ref. ).
Since the function of IL-10 represents an unresolved enigma in cancer therapy, and since circRNAs also have dual roles in cancer therapy, a comprehensive understanding of the circRNAs regulating IL-10 expression and function might be the key to answering numerous questions. Therefore, several studies that shed light on novel circRNAs regulating IL-10 in different oncological and non-oncological contexts are highlighted here. Some circRNAs can either enhance or inhibit IL-10 production and consequently either promote or inhibit carcinogenesis. For example, circMERTK was reported to inhibit IL-10 production in colorectal cancer. The same study concluded that circMERTK knockdown reduced the activity of CD8 + T cells, suggesting that circMERTK may affect immunosuppressive activity through the circMERTK/miR-125a-3p/IL-10 axis (Ref. ). According to another in vitro study, downregulation of secreted PD-L1 by non-small cell lung cancer cells upon knockdown of circCPA4 resulted in the activation of CD8 + T cells in the TME (Ref. ). In addition, the study found that PD-L1 abrogation reduced the expression of IL-10 in CD8 + T cells (Ref. ). Circ103516 expression was found to be inversely correlated with IL-10 in inflammatory bowel diseases and was thus postulated to play a proinflammatory role by sponging miR-19b. Additionally, circRNA HECTD1 was discovered to contribute to the development of acute ischaemic stroke and to be inversely linked with IL-10 production, suggesting that IL-10 plays a protective function in acute ischaemic stroke (Ref. ). In another cardiac context, the synthesis of IL-10 was decreased as a result of the overexpression of circFoxo3, a circRNA that is crucial in preventing cardiac dysfunction brought on by myocardial infarction (Ref. ). Downregulation of circ00074854 was reported to prevent polarization of M2 macrophages, which consequently reduced the invasion and migration of hepatocellular carcinoma cells.
In the same study, macrophages exposed to exosomes produced by HepG2 cells containing lower amounts of circ00074854 had significantly lower levels of IL-10 than those exposed to exosomes from control HepG2 cells, demonstrating the direct relationship between circ00074854 and IL-10 in different cancer settings (Ref. ). Furthermore, a recent study emphasized the potential of circSnx5 as a therapeutic target for immunological disorders, since it has the ability to regulate the immunity and tolerance induced by DCs. Interestingly, knockdown of circSnx5 led to a significant drop in IL-10, whilst overexpression of circSnx5 blocked DC maturation and boosted IL-10 expression (Ref. ). Another study focused on circ0001598 as a potential target for treating breast cancer. It was discovered that circ0001598 regulates miR-1184 and PD-L1, significantly increasing breast cancer proliferation, chemo-resistance and escape from immune surveillance. According to the same study, depletion of circ0001598 increased the susceptibility of breast cancer cells to Trastuzumab-induced CD8 + T cell cytotoxicity while decreasing the production of IL-10 (Ref. ). Another study showed that knockdown of the circRNA PLCE1 ablated IL-10 production from macrophages, while PLCE1 promoted epithelial-mesenchymal transition, thus aiding glycolysis in colorectal cancer (Ref. ). Another recently identified circRNA, circZNF609, has been linked to the pathogenesis of coronary artery disease, and forced overexpression of circZNF609 augmented IL-10 expression (Ref. ). It is also worth mentioning that a recent study discovered that circRNA NF1-419 attenuated inflammatory factors such as IL-10 and aging markers to postpone the onset of senile dementia (Ref. ). Also, circGFRA1 has been indicated as a potential therapeutic target in prostate cancer, where Meng et al.
reported that, through a reduction in IL-10, knocking down circGFRA1 lessened the tumourigenic and immune-evading characteristics of prostate cancer cells (Ref. ). Zhang et al. also discovered a role for circ0005075 in mediating neuroinflammation, where silencing of circ0005075 in rat models resulted in a decrease in IL-10 production and protected against neuro-inflammation (Ref. ). Another in vitro study revealed that circCdr1 overexpression enhanced the transcription of IL-10 in both naïve and pro-inflammatory macrophages (Ref. ). CircCHST15 was recently reported to possess an oncogenic role by promoting immune escape through upregulating the expression of IL-10 and exerting a sponging effect on miR-155 and miR-194 in lung cancer (Ref. ). Additionally, circ_0046523 was found to promote carcinogenesis, mediate immunosuppression and abrogate CD8 + T cell function in pancreatic cancer by enhancing the secretion of IL-10 and TGF-β (Ref. ). Furthermore, silencing circDNMT3B was discovered to decrease cell survival, promote apoptosis and increase IL-10 production in rat intestinal tissue (Ref. ). Collectively, it is quite clear that circRNAs that inhibit IL-10 production from tumour cells act as tumour suppressors, while those that increase IL-10 production from tumour cells promote oncogenesis, cell survival and drug resistance and mediate immunosuppression. This highlights the promising role of such circRNAs as novel immunotherapeutic molecules that could ablate IL-10 production and act as a powerful immunomodulatory anti-cancer treatment for several cancer patients.

Pharmacogenomic approach: single nucleotide polymorphisms in IL-10 and its receptor

IL-10 gene

A very important basis for studies and research on IL-10 regulation is the examination of its genomic location and promoter structure. The IL-10 gene encodes a protein 178 amino acids long, which is secreted after cleavage of an 18-amino-acid signal peptide (Ref. ).
At the proximal promoter sequence of IL-10 in the human genome, there is a TATA box located upstream of the translation start site, together with binding sites for several transcription factor family members, including nuclear factor-κB (NF-κB), STAT, specificity protein (Sp), CREB, CCATT enhancer/binding protein (C/EBP) and c-musculoaponeurotic fibrosarcoma factor (c-MAF), which have been characterized as 'critical' factors in regulating IL-10 expression (Ref. ).

IL-10 signalling

Next, it is necessary to understand how IL-10 signals through its receptor. IL-10R is a heterodimeric receptor complex composed of two chains (IL-10Rα 'R1' and IL-10Rβ 'R2'). The α-chain binds directly to IL-10, while the β-chain is subsequently recruited into the IL-10/IL-10Rα complex (Ref. ). The binding of IL-10 to IL-10Rα induces a conformational change in the receptor, allowing it to dimerize with IL-10Rβ; this dimerization leads to signal transduction in target cells (Ref. ). When the IL-10 complex is formed, the tyrosine kinases Tyk2 and Jak1 become activated and phosphorylate specific tyrosine residues. This phosphorylation in turn activates the cytoplasmic, inactive transcription factor STAT-3, resulting in its translocation and transcriptional activation (Ref. ). IL-10 rapidly activates STAT-3, which remains phosphorylated over a sustained period, unlike the transient STAT-3 phosphorylation induced by IL-6 (Ref. ). The STAT-3 docking sites in IL-10R1 appear sufficient to induce IL-10-mediated proliferative responses (Ref. ), while the IL-10R2 intracellular domain seems to provide the docking site for Tyk2. Thus, most IL-10-specific cellular functions appear to reside in the IL-10R1 chain, whereas IL-10R2 recruits the downstream signalling kinases (Ref. ).

SNPs affecting IL-10

The IL-10 gene promoter and IL-10R have been found to include a significant number of SNPs (Refs , ).
There is strong evidence that several of these polymorphisms are linked to the differential expression of IL-10 in vitro and, in some situations, in vivo (Refs , , ). Some of these IL-10 variants have been associated with either low or high expression in several cancer types. For example, some genotypes have been shown to correlate with decreased expression of IL-10 and a higher risk of developing prostate cancer or non-Hodgkin's lymphoma (Refs , ). On the other hand, other evidence concluded that some IL-10 variants are associated with higher expression of IL-10 and, consequently, an elevated risk of developing multiple myeloma, cervical cancer and gastric cancer in patients harbouring a particular IL-10 variant (Refs – ). It has also been demonstrated that IL-10 gene transcription and translation are impacted by SNPs in the IL-10 promoter region, leading to aberrant cell division and the emergence of breast cancer (Ref. ). summarizes most of the IL-10 polymorphisms documented in the literature and their association with cancer development and risk. The role of IL-10 in malignancy remains the subject of numerous disputes in the literature as to whether its effect is positive or negative. As a result, whether IL-10 blockade is effective as an immunotherapeutic strategy is another unsolved puzzle. This opens the door to a crucial query that might provide the answer but has not yet been addressed in the literature: it remains unclear whether SNPs in IL-10 or its receptor account for the varying effects of IL-10 inhibition on cancer treatment. A clinical investigation addressing the existence of SNPs in IL-10 or its receptors and their impact on the response to IL-10 therapy is necessary. Such pharmacogenomic investigations would aid the development of immunotherapeutic modalities by identifying the individuals best suited to receive these cutting-edge drugs.
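The genotype-risk associations summarized above are typically quantified as odds ratios from case-control allele counts. A minimal sketch, using invented counts for a hypothetical IL-10 promoter variant (not data from any study cited here):

```python
import math

# Allelic odds ratio with a Woolf-type 95% CI for a hypothetical case-control
# study of an IL-10 promoter variant. All counts below are invented.

def odds_ratio_ci(case_alt, case_ref, ctrl_alt, ctrl_ref, z=1.96):
    """OR = (case_alt/case_ref) / (ctrl_alt/ctrl_ref), CI on the log scale."""
    or_ = (case_alt * ctrl_ref) / (case_ref * ctrl_alt)
    se = math.sqrt(1 / case_alt + 1 / case_ref + 1 / ctrl_alt + 1 / ctrl_ref)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: 120 vs 80 variant/reference alleles in cases,
# 90 vs 110 in controls.
or_, (lo, hi) = odds_ratio_ci(120, 80, 90, 110)
print(f"OR = {or_:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

A confidence interval excluding 1.0, as in this made-up example, is what such studies report as evidence of association; real analyses additionally adjust for covariates and test Hardy-Weinberg equilibrium.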
One of these cytokines is the paradoxical interleukin ‘IL-10’, which remains an integral part of several malignancies, and regulates the secretion of other cytokines. This pleiotropic cytokine was characterized early in the late 1980s and was named cytokine synthesis inhibitory factor (Refs , ). Later on, six immune mediators (IL-10, IL-19, IL-20, IL-22, IL-24 and IL-26) were grouped into the IL-10 family of cytokines based on their similarities with respect to the structure and location of their encoding genes, their primary and secondary protein structures and the receptor complexes (Refs – ). Out of these six members, IL-10 has been recognized as a major member mediating different functions within the immune system and cancer cells (Ref. ). IL-10 produced by immune cells IL-10 has also been causally linked to immunity in both the innate and adaptive immune arms. Different triggers have been shown to induce IL-10 production in various immune cells (Ref. ). The main source of IL-10 appears to be monocytes, and different T-cell subsets (Ref. ). Moreover, DCs, B cells, NK cells, mast cells, as well as neutrophils, and eosinophils can also synthesize IL-10 (Ref. ). During infection, macrophages are considered a major source of IL-10. Several toll-like receptors (TLRs), including TLR2, TLR4, TLR5, TLR7 and TLR9 have been shown to induce IL-10 production in macrophages and DCs (Refs – ). Also, IL-10 production in DCs is enhanced by the co-activation of TLR2 and Dectin-1 (Ref. ). Following exposure to IL-10, DCs can initiate the development of regulatory T cells (Tregs) that limit these effector responses (Refs , ). B cells also express several TLRs which have been shown to promote IL-10 production including TLR2, TLR4 or TLR9 (Refs – ). Nonetheless, it is also worth mentioning that IFN- α augments IL-10 production if combined with TLR agonists from B cells (Refs , ). 
Additionally, neutrophils produce IL-10 in response to TLR and C-type lectin co-activation through myeloid differentiation primary response 88 (MyD88) and spleen tyrosine kinase (SYK), respectively (Ref. ). The key producer of IL-10 is Treg cells that produce other immunoregulatory cytokines, such as TGF- β (Ref. ). The production and action of both cytokines IL-10 and TGF- β are involved in a positive feedback loop (Ref. ). Concerning the mechanism of IL-10 production from Tregs, it has been shown that IL-2 and IL-4 induce IL-10 production from Tregs (Refs – ). Additionally, a study concluded that TGF- β is required for the differentiation and production of IL-10 from Tregs (Ref. ). IL-2 and IL-27 are responsible for inducing IL-10 expression in cytotoxic CD8 + T cells (Ref. ). However, IL-12 and IL-23 prime CD8 + and CD4 + T cells for IL-10 production (Refs – ). Some studies reported IL-10 immunosuppressive effects such as inhibiting IFN- γ and TNF- α production by NK cells in-vitro (Ref. ). However, other studies reported IL-10 immunostimulatory effects via the promotion of NK cell cytotoxicity in preclinical models (Refs , ). Adding to the complexity of this master cytokine, one of the studies has shown that the exposure of malignant cells to IL-10 resulted in a reduction in their sensitivity to cytotoxic T cells but an increase in NK cell cytotoxicity (Ref. ). This might suggest that IL-10 contributes to fighting malignant cells by stimulating the immune innate arm (Ref. ). As mentioned earlier, one of the main drivers of IL-10 expression in many immune cells is TLR signalling (Ref. ). TLR ligation leads to the activation of several downstream pathways, including the mitogen-activated protein kinase (MAPK) pathway and the phosphoinositide 3-kinases (PI3K)/AKT pathways (Ref. ). 
Activation of MAPK and the downstream extracellular signal-regulated kinases (ERK1 and ERK2) is critical for IL-10 production in macrophages and DCs in response to several TLR activators (Refs , , , ). The MAPK pathway ultimately activates several transcription factor family members, such as activator protein-1 (AP-1), which activates IL-10 transcription (Refs , , , ). Moreover, ERK and p38 also contribute to IL-10 production in TLR-stimulated macrophages, monocytes and DCs (Refs , – ). ERK and p38 may function cooperatively in their regulation of IL-10 production through their joint activation of the mitogen- and stress-activated protein kinases (MSK1 and MSK2), which promote IL-10 production in TLR-stimulated macrophages. Downstream of MSK1 and MSK2 are the transcription factors cAMP-response element binding protein (CREB) and AP-1, which also bind and transactivate the IL-10 promoter (Refs – ). It is also worth mentioning that both ERK and p38 were shown to directly phosphorylate Sp1, one of the IL-10 transcription factors (Refs , ). The PI3K/AKT pathway also contributes to IL-10 expression in myeloid cells, either by antagonizing glycogen synthase kinase 3 beta (GSK3-β), a constitutively active kinase that inhibits IL-10 production, or through activation of ERK, the mammalian target of rapamycin (mTOR) and STAT-3 (Refs – ).
IL-10 produced by cancer cells
IL-10 has been linked to many types of cancer, such as gastric cancer (Ref. ), cervical cancer (Ref. ), lung cancer (Ref. ), breast cancer (Ref. ), colon adenocarcinoma (Ref. ), head and neck cancer (Ref. ), oesophageal cancer, nasopharyngeal cancer, oral cancer (Ref. ) and colorectal cancer (Ref. ). Its reported role in tumourigenesis is controversial: it can act as either a tumour suppressor or a tumour promoter. Because of the complex nature of IL-10, its role in shaping the TME remains a gap that needs further research.
Most of the literature is directed towards the pro-tumoural activity of IL-10 in different oncological settings. This could occur through a positive feedback loop with STAT-3: IL-10 has been shown to activate STAT-3, resulting in the upregulation of B-cell lymphoma 2 (BCL-2) or B-cell lymphoma-extra-large (BCL-xL) and the stimulation of cell proliferation by cyclins D1, D2 and B and the proto-oncogene c-Myc, thus contributing to cancer progression (Ref. ). On the other hand, IL-10 immunosuppressive activity has been reported on macrophages and DCs, where it was found to dampen antigen presentation, cell maturation and differentiation, resulting in tumour immune evasion (Ref. ). Several studies have examined the role of IL-10 in different types of malignancies, as listed below. Previous studies highlighted a significant correlation between IL-10 and the percentage of plasma cells in multiple myeloma patients, as IL-10 induces the proliferation of plasma cells (Refs – ). Other studies indicated an elevation of IL-10 in haematological malignancies such as Hodgkin and non-Hodgkin lymphoma (Refs , ). High IL-10 levels were reported to be associated with shorter survival among patients with diffuse large-cell lymphoma (Ref. ). Similarly, high IL-10 levels were found to be a prognostic factor in peripheral T cell lymphoma, associated with worse overall survival, a lower complete response rate and a higher early relapse rate (Ref. ). Moreover, elevated IL-10 at diagnosis was found to be an independent prognostic marker in adult hemophagocytic lymphohistiocytosis patients, helping to guide the choice of treatment strategy (Ref. ).
The riddle of IL-10 at the tumour-immune cell synapse
The balance between pro-inflammatory and anti-inflammatory signals is crucial for the maintenance of normal physiology and the prevention of cancer and a wide variety of other diseases (Refs , – ).
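The IL-10/STAT-3 positive feedback loop described above can be sketched as a minimal two-variable toy model in which each species drives production of the other. Every rate constant and starting level below is invented purely for illustration and carries no quantitative biological meaning.

```python
# Toy model of a mutual-activation (positive feedback) loop, e.g. IL-10/STAT-3.
# All parameters are invented for illustration; this is not a fitted model.

def simulate(steps: int = 1000, dt: float = 0.01) -> tuple[float, float]:
    il10, stat3 = 0.1, 0.1                   # arbitrary starting levels
    for _ in range(steps):
        d_il10 = 1.0 * stat3 - 0.5 * il10    # STAT-3 drives IL-10; first-order decay
        d_stat3 = 0.8 * il10 - 0.5 * stat3   # IL-10 activates STAT-3; first-order decay
        il10 += d_il10 * dt
        stat3 += d_stat3 * dt
    return il10, stat3

il10, stat3 = simulate()
print(il10 > 0.1 and stat3 > 0.1)   # mutual activation amplifies both signals
```

Because each species amplifies the other faster than it decays, the loop runs away; in real cells, receptor saturation and negative regulators bound this growth.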
In this context, IL-10 plays a dual role, acting either as a pro-inflammatory or an anti-inflammatory mediator (Ref. ). Regarding its role in cancer, studies have reported that IL-10, secreted by tumours or tumour-infiltrating immune cells, allows malignant cells to escape immune surveillance (Refs – ). In a study by Neven et al., IL-10 knockout in mice promoted the development of colon cancer. Moreover, the same study showed that humans deficient in IL-10 signalling molecules were more prone to develop lymphomas at a younger age (Ref. ). As an anti-inflammatory cytokine, IL-10 is considered crucial for the homoeostasis of anti-inflammatory Tregs and the suppression of proinflammatory IL-17-expressing T cells. However, IL-10 action depends on multiple factors, such as the targeted cells, other concurrent stimuli, and the timing and duration of its effect (Ref. ). Even with these many rationales presented, open questions remain about the nature of this complex cytokine.
Controversial data exist regarding the effectiveness of IL-10 immunotherapy in cancer (Ref. ). Cancer vaccines that utilized monoclonal antibodies (mAbs) against IL-10 receptors succeeded in increasing CD8+ T cell responses and inhibiting tumour growth whether injected intraperitoneally or subcutaneously (Refs , ). The beneficial effect of IL-10 blockade is best explained by relief of IL-10-induced suppression of DCs: blockade preserves their antigen-presentation capacity, which IL-10 otherwise impairs by decreasing the expression of MHC class II and co-stimulatory molecules (Ref. ). Thus, DC-based vaccinations that disrupt IL-10 signalling provide more potent anti-tumour responses (Ref. ). On the contrary, others claimed that antibodies targeting IL-10R had no protective effect against tumour growth when used with vaccines containing adjuvants that do not induce IL-10, such as the TLR3 ligand poly(I:C) or anti-CD40 agonistic antibodies (Ref. ).
Such controversy regarding the effectiveness of therapeutic immunization could be explained by vaccine-induced IL-10 rather than tumour-derived IL-10 (Ref. ). It was previously reported that the prognosis of cancer patients is inversely correlated with elevated serum and tumour IL-10 levels (Ref. ). Despite that, exogenous administration of IL-10 was tested in clinical studies and resulted in immunological activation, as evidenced by higher granzyme and IFN levels in the serum of treated patients. Pegylated recombinant (PEG) murine IL-10 promoted rejection of tumours and metastases by enhancing CD8+ T cell-mediated immune responses (Ref. ). In addition, PEG-IL-10 exhibited immunologic and clinical advantages in solid tumours in clinical trials, particularly in RCC and uveal melanoma (Ref. ). CD8+ tumour-infiltrating lymphocytes (TILs) in metastatic melanoma co-upregulate IL-10R and PD-1. While PD-1 blockade or IL-10 neutralization as monotherapies were insufficient to produce anti-tumour activity, combining PD-L1 blockers with IL-10R blockers exerted anti-tumour effects by enhancing T cell responses, thereby suppressing tumour growth (Ref. ). Similarly, mice with ovarian tumours treated with PD-1 blocking antibodies had higher levels of IL-10 in their serum and ascites. Moreover, infiltration of immunosuppressive MDSCs was reduced and immunological activity was increased when IL-10 and PD-1 blockers were used together (Ref. ). On the other hand, a multi-centred trial involving 111 patients with advanced malignant solid tumours unresponsive to previous therapies revealed that anti-PD-1 treatment (pembrolizumab or nivolumab) in combination with PEG-IL-10 offered a new therapeutic option (Ref. ). Most immune cells express IL-10 receptors and can activate subsequent downstream signalling pathways.
Therefore, the paradox underlying IL-10 blockade, and whether it carries a beneficial or detrimental role in cancer treatment, might be deciphered if we understood exactly how these cells react to IL-10 signalling, through comprehensive genomic, epigenomic and proteomic analysis. Epigenetic regulation includes DNA methylation, histone modifications, histone acetylation and the action of non-coding RNAs (ncRNAs) (Refs – ). Epigenetic changes arising from alterations in chromatin usually lead to altered gene expression, and such changes can either activate or suppress an oncogene or a tumour suppressor gene (Refs – ). It has recently been revealed that IL-10 is highly epigenetically regulated (Refs , ). It is worth noting that this level of post-transcriptional regulation, exerted via ncRNAs including microRNAs (miRNAs) (Refs , ), long non-coding RNAs (lncRNAs) (Refs – ) and circular RNAs (circRNAs) (Refs , , ), might explain the differential expression and effects of IL-10 in different cells of the TME despite the common pathways of IL-10 induction mentioned earlier in this review. Epigenetic modulation of IL-10 via DNA methylation, histone modifications and histone acetylation has been well documented (Refs , ) and was recently reviewed in (Ref. ). However, the epigenetic regulation of IL-10 via ncRNAs (miRNAs, lncRNAs and circRNAs) is only now being explored. Therefore, a closer look at the epigenetic regulation of IL-10 via ncRNAs could aid in understanding the complex nature of this cytokine.
microRNAs (miRNAs) regulating IL-10
miRNAs are short ncRNAs, around 18–25 nucleotides long, that are widely found in plants, viruses and animals (Refs , , , ).
These miRNAs can regulate gene expression either by degrading the target mRNA or by suppressing mRNA translation and reducing mRNA stability through binding to the 3′UTR (untranslated region) of a target gene (Refs , ). A miRNA can therefore inhibit or activate the expression of tumour suppressors or oncogenes. Generally, oncogenic miRNAs (oncomiRs) are over-expressed in cancers, whereas miRNAs with tumour-suppressive function are under-expressed (Refs , , ). When oncomiRs are inhibited or tumour suppressor miRNAs are stimulated, cancer cell metastasis, proliferation and survival may be reduced, depending on the specific miRNA affected and the type of cancer (Refs , , ). Moreover, some cancers are dependent on specific oncomiRs, and suppressing such oncomiRs could completely regress cancer growth (Refs , , ). A few studies have identified miRNAs that modulate IL-10 expression. In a study testing for possible post-transcriptional modulation of IL-10Rα and IL-10Rβ expression by miRNAs, three miRNAs (miR-15a, miR-185 and miR-211) were shown to have seed regions targeting the 3′UTR of IL-10Rα. These miRNAs inhibited the proliferation of IL-10-treated melanoma cells, while their inhibitors increased melanoma cell proliferation (Ref. ). IL-10 was also shown to be targeted by several other miRNAs (Ref. ). Another study showed that miR-106a could bind to the 3′UTR of IL-10 and significantly downregulate its expression in vitro (Ref. ). Two transcription factors, early growth response 1 (Egr1) and Sp1, were implicated in the induction of miR-106a, which consequently reduced IL-10 levels (Ref. ). Furthermore, an inverse relation was reported between Egr1-stimulated miR-106a and IL-10 levels. It is also worth mentioning that miR-106a is part of a cluster known to be dysregulated in 46% of human T-cell leukaemias.
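The seed-region targeting described above can be illustrated with a toy scan: a miRNA's seed (roughly nucleotides 2–8) base-pairs with the 3′UTR by Watson–Crick complementarity, so candidate sites can be located by searching the UTR for the reverse complement of the seed. The sequences below are invented placeholders, not the real miR-106a or IL-10 sequences.

```python
# Toy illustration of miRNA seed matching against a 3'UTR.
# Sequences are invented placeholders, NOT real miR-106a / IL-10 sequences.

def reverse_complement(rna: str) -> str:
    """Reverse complement of an RNA sequence."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[b] for b in reversed(rna))

def seed_sites(mirna: str, utr: str) -> list[int]:
    """Return 0-based UTR positions matching the miRNA seed (nucleotides 2-8)."""
    seed = mirna[1:8]                     # seed region: nucleotides 2-8
    target = reverse_complement(seed)     # the UTR motif the seed base-pairs with
    return [i for i in range(len(utr) - len(target) + 1)
            if utr[i:i + len(target)] == target]

mirna = "UAAAGUGCUUACAGUGCAGGUAG"          # placeholder 23-nt miRNA
utr = "AAGCACUUUAGGGAAGCACUUUACC"          # placeholder 3'UTR with two seed sites
print(seed_sites(mirna, utr))             # -> [2, 15]
```

Real target prediction additionally weighs site type, conservation and UTR context, but the core operation is this complementarity search.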
Thus, it was deduced that IL-10 might promote leukaemic cell survival through its modulation by miR-106a (Ref. ). Another miRNA reported to positively regulate IL-10 is miR-4661. Binding of miR-4661 to the 3′UTR of IL-10 resulted in a net increase in the half-life of the IL-10 transcript, an effect achieved by preventing tristetraprolin (TTP) from binding to the IL-10 mRNA (Ref. ). TTP is an RNA-binding protein that plays a critical role in regulating proinflammatory immune responses by destabilizing target mRNAs through binding to AU-rich elements (AREs) in their 3′UTRs (Ref. ). Moreover, miRNA/IL-10 interactions were reported in a study by Liu et al., revealing that miR-98-mediated post-transcriptional control could potentially be involved in fine-tuning IL-10 production in endotoxin tolerance (Refs , ). On the other hand, IL-10 was reported to upregulate miRNAs that contribute to an anti-inflammatory response, such as miR-187, and to downregulate those that are highly pro-inflammatory, such as miR-155 (Ref. ). IL-10 was able to downregulate LPS-induced miR-155 (Ref. ). Moreover, in vivo, mice deficient in miR-155 could not generate a protective immune response (Ref. ), whereas in IL-10-deficient mouse cells, miR-155 levels increased markedly. miR-155 was previously known to target a number of genes involved in the immune response, such as suppressor of cytokine signalling (SOCS), inhibitor of NF-κB kinase subunit epsilon (IKBKE) and Fas-associated death domain (FADD). Thus, the targeting of this miRNA by IL-10 is likely to reveal key mechanisms through which IL-10 exerts control in the cell. Another study uncovered details of the IL-10 pathway by examining the effect of IL-10 on miRNA expression, using IL-10-deficient mice.
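The net effect of stabilizing a transcript, as reported for miR-4661 blocking TTP, can be put in numbers with first-order decay: the fraction of mRNA remaining after time t is 0.5^(t/t_half). The half-life values below are arbitrary, chosen only to show how blocking a destabilizer shifts message levels.

```python
# First-order mRNA decay: fraction remaining after time t for a given half-life.
# Half-life values are arbitrary illustrations, not measured IL-10 values.

def fraction_remaining(t: float, t_half: float) -> float:
    """Fraction of transcript left after time t under first-order decay."""
    return 0.5 ** (t / t_half)

# Same transcript, 60 min after transcription stops:
destabilized = fraction_remaining(60, t_half=20)   # destabilizer bound: short half-life
stabilized = fraction_remaining(60, t_half=60)     # destabilizer blocked: longer half-life
print(round(destabilized, 3), round(stabilized, 3))   # -> 0.125 0.5
```

A three-fold longer half-life leaves four times more message after an hour, which is why competition for a single ARE can meaningfully change cytokine output.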
Ten miRNAs were found to be upregulated in IL-10-deficient mice (miR-19a, miR-21, miR-31, miR-101, miR-223, miR-326, miR-142-3p, miR-142-5p, miR-146a and miR-155) (Ref. ). miR-223 could hinder the Roquin ubiquitin ligase by binding to its 3′UTR, thereby regulating the production of IL-17 and of its inhibitor IL-10. This suggested a mechanism by which IL-10 could modulate IL-17 expression through miR-223. As previously mentioned, IL-10 can also induce the expression of anti-inflammatory miRNAs, such as miR-198, which is known to suppress TNF-α and IL-6, thereby promoting an anti-inflammatory environment (Ref. ). Collectively, the mutual interactions between IL-10 and miRNAs discussed in this section highlight an important role for miRNA-mediated regulation of IL-10 expression and provide new insights into the intertwined mechanistic details of this immunomodulatory cytokine.
LncRNAs regulating IL-10
RNA transcripts longer than 200 nucleotides that are not translated into protein are regarded as lncRNAs (Refs , , ). LncRNAs play a significant role in the occurrence and development of cancer and can regulate the expression of cytokines such as IL-10 and IFN-γ, as reported in a study by Tang et al. on non-small cell lung cancer (NSCLC) (Ref. ). A large number of lncRNAs have been associated with cancer, as recognized by genome-wide association studies on numerous tumours (Ref. ). They are believed to exhibit both tumour-suppressive and tumour-promoting functions and hence hold promise as biomarkers and therapeutic targets for cancer (Ref. ). Increased expression of the lncRNA SNHGI was also reported in CD4+ TILs of cancerous breast tissue, whereas the expression of FOX and IL-10 was greatly reduced by siRNA against SNHGI (Ref. ).
Moreover, silencing the lncRNA cox-2 was believed to increase the expression of IL-10, Arg-1 and Fizz-1 in M2 macrophages (Ref. ). A study conducted by Zhou et al. reported reduced expression of IL-10 upon suppression of lnc-LINC00473 (Ref. ). Additionally, increased expression of IL-10 and reduced CRC cell proliferation have been associated with knockdown of the lncRNA growth arrest-specific transcript 5 (GAS5), while knockout of GAS5 promoted CRC colony formation and proliferation (Ref. ). LncRNAs are known to regulate various signalling pathways, such as TGF-β, STAT3, Hippo, EGF, Wnt, PI3K/AKT and p53, whilst IL-10 is mostly involved in T-cell immune surveillance and suppression of cancer-associated inflammation. The expression of interleukins is regulated by lncRNAs known to be involved in various types of cancer. For instance, previous work by our group highlighted the potential of miRNAs and lncRNAs in the regulation of IL-10 in breast cancer, where miR-17-5p was identified as a dual regulator of TNF-α and IL-10. Additionally, knocking down the lncRNAs MALAT1 and/or H19 induced miR-17-5p and decreased TNF-α and IL-10 expression levels (Ref. ). Such reports highlighted the immune-activator potential of miRNAs and the oncogenic potential of lncRNAs in cancers through the regulation of immunological targets in the TME. Hence, the relationship between lncRNAs and the regulation of IL-10 in various cancers needs further validation to establish a solid therapeutic link (Ref. ).
CircRNAs regulating IL-10
CircRNAs are special ncRNA molecules with a distinctive ring structure; they play significant roles as gene regulators and are among the most recently discovered epigenetic factors (Refs , ). Abnormal production of circRNAs was found to influence the onset, progression and metastasis of cancer by acting as either tumour-suppressive or oncogenic factors (Refs , – ).
This occurs via interactions with proteins, miRNA sponge function and post-transcriptional regulation (Refs , , ). Moreover, a line of evidence showed that circRNAs play pivotal roles in chemoresistance (Refs , ). Recently, specific circRNAs were found to possess immunomodulatory functions and to alter the response of the TME by regulating the functions of tumour-infiltrating immune cells. For instance, CD4+ T cell activity is enhanced by circ0005519, which promotes the expression of IL-13 and IL-6 by affecting the expression of hsa-let-7a-5p (Ref. ). On the other hand, circNT5C2 could attenuate the immune response by targeting miR-448 and serve as an oncogene by promoting tumour proliferation and metastasis (Ref. ). Since the function of IL-10 represents an unresolved enigma in cancer therapy, and since circRNAs also have dual roles in cancer therapy, a comprehensive understanding of the circRNAs regulating IL-10 expression and function might be the key to answering numerous questions. Therefore, several studies that shed light on novel circRNAs regulating IL-10 in different oncological and non-oncological contexts are highlighted here. Some circRNAs can either enhance or inhibit IL-10 production and consequently either promote or inhibit carcinogenesis. For example, circMERTK was reported to inhibit IL-10 production in colorectal cancer. The same study concluded that circMERTK knockdown reduced the activity of CD8+ T cells, suggesting that circMERTK may affect immunosuppressive activity through the circMERTK/miR-125a-3p/IL-10 axis (Ref. ). According to another in vitro study, knockdown of circCPA4 downregulated secreted PD-L1 in non-small cell lung cancer cells, resulting in the activation of CD8+ T cells in the TME (Ref. ). In addition, the study found that PD-L1 abrogation reduced the expression of IL-10 in CD8+ T cells (Ref. ).
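The miRNA-sponge function mentioned above can be reduced to a toy mass balance: a circRNA carrying several binding sites sequesters part of a miRNA pool, leaving fewer free copies to repress their mRNA targets. The copy numbers below are arbitrary, and binding is treated as all-or-none purely for illustration.

```python
# Toy mass balance for circRNA "sponging" of a miRNA.
# Copy numbers are arbitrary; binding is treated as all-or-none for simplicity.

def free_mirna(total_mirna: int, sponge_copies: int, sites_per_sponge: int) -> int:
    """miRNA copies left free after the sponge soaks up what it can."""
    return max(0, total_mirna - sponge_copies * sites_per_sponge)

# 1000 miRNA copies; 100 circRNA copies with 4 sites each sequester 400 of them.
print(free_mirna(1000, 100, 4))   # -> 600
```

In reality binding is an equilibrium set by site affinity and competing targets, but this arithmetic captures why raising sponge abundance de-represses the miRNA's targets.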
Circ103516 expression was found to be inversely correlated with IL-10 in inflammatory bowel diseases, and it was thus postulated to play a proinflammatory role by sponging miR-19b. Additionally, it was discovered that the circRNA HECTD1 contributed to the development of acute ischaemic stroke and was inversely linked with IL-10 production, suggesting that IL-10 plays a protective function in acute ischaemic stroke (Ref. ). In another cardiac context, the synthesis of IL-10 was decreased as a result of overexpression of circFoxo3, a circRNA that is crucial in preventing cardiac dysfunction brought on by myocardial infarction (Ref. ). Downregulation of circ00074854 was reported to prevent polarization of M2 macrophages, which consequently alleviated the invasion and migration of hepatocellular carcinoma cells. In the same study, macrophages exposed to exosomes from HepG2 cells containing lower amounts of circ00074854 had significantly lower levels of IL-10 than those exposed to exosomes from control HepG2 cells, demonstrating a direct relationship between circ00074854 and IL-10 (Ref. ). Furthermore, a recent study emphasized the potential of circSnx5 as a therapeutic target for immunological disorders, since it can regulate the immunity and tolerance induced by DCs. Interestingly, knockdown of circSnx5 led to a significant drop in IL-10, whilst overexpression of circSnx5 blocked DC maturation and boosted IL-10 expression (Ref. ). Another study focused on circ0001598 as a potential target for treating breast cancer: circ0001598 was found to regulate miR-1184 and PD-L1, significantly increasing breast cancer proliferation, chemoresistance and escape from immune surveillance.
In the same study, depletion of circ0001598 increased the susceptibility of breast cancer cells to Trastuzumab-induced CD8+ T cell cytotoxicity while decreasing the production of IL-10 (Ref. ). Another study showed that knockdown of the circRNA PLCE1 ablated IL-10 production by macrophages, whereas PLCE1 promoted epithelial-to-mesenchymal transition, thus aiding glycolysis in colorectal cancer (Ref. ). Another recently identified circRNA, circZNF609, has been linked to the pathogenesis of coronary artery disease, and forced overexpression of circZNF609 augmented IL-10 expression (Ref. ). It is also worth mentioning that a recent study discovered that the circRNA NF1-419 attenuated inflammatory factors such as IL-10 and aging markers to postpone the onset of senile dementia (Ref. ). Also, circGFRA1 has been indicated as a potential therapeutic target in prostate cancer, where Meng et al. reported that knocking down circGFRA1 lessens the tumourigenic and immune-evading characteristics of prostate cancer cells through a reduction in IL-10 (Ref. ). Zhang et al. also discovered a role for circ0005075 in mediating neuroinflammation, where silencing of circ0005075 in rat models resulted in a decrease in IL-10 production and protected against neuroinflammation (Ref. ). Another in vitro study revealed that circCdr1 overexpression enhanced the transcription of IL-10 in both naïve and pro-inflammatory macrophages (Ref. ). CircCHST15 was recently reported to possess an oncogenic role, promoting immune escape through upregulation of IL-10 expression and a sponging effect on miR-155 and miR-194 in lung cancer (Ref. ). Additionally, circ_0046523 was found to promote carcinogenesis, mediate immunosuppression and abrogate CD8+ T cell function in pancreatic cancer by enhancing the secretion of IL-10 and TGF-β (Ref. ).
Furthermore, silencing circDNMT3B was discovered to decrease cell survival, promote apoptosis and increase IL-10 production in rat intestinal tissue (Ref. ). Collectively, it is quite clear that the circRNAs that inhibit IL-10 production from tumour cells act as tumour suppressors, while those that increase the production of IL-10 from tumour cells promote oncogenesis, cell survival, drug resistance and mediate immunosuppression. This highlights the promising role of such circRNAs as novel immunotherapeutic molecules that could ablate IL-10 production and act as a powerful immunomodulatory anti-cancer treatment for several cancer patients. miRNAs are short ncRNAs around 18–25 nucleotides long that widely exist in plants, viruses and animals (Refs , , , ). These miRNAs can regulate gene expression by either degrading the mRNA target or by suppressing mRNA translation and reducing mRNA stability by binding to the 3′UTR (untranslated region) of a target gene (Refs , ). Thus, a miRNA could therefore inhibit or activate the expression of tumour suppressors or oncogenes. Generally, oncogenic miRNAs (oncomiRs) are found to be over-expressed in cancers, whereas miRNAs with tumour-suppressive function are found to be under-expressed (Refs , , ). When these oncomiRs or tumour suppressor miRNAs are inhibited or stimulated, respectively, cancer cell metastasis, proliferation and survival may be reduced, depending on the specific miRNA being affected and the type of cancer (Refs , , ). Moreover, some cancers are dependent on specific oncomiRs, and suppressing such oncomiRs could completely regress cancer growth (Refs , , ). Few studies have presented miRNAs that could modulate IL-10 expression. In a study, testing for the possible post-transcriptional modulation of IL-10R α and IL-10R β expression by miRNAs, three miRNAs were shown to have seed regions that target the 3′UTR of IL-10R α ; miR-15a, miR-185 and miR-211. 
These miRNAs were shown to inhibit the proliferation of IL-10-treated melanoma cells, while their inhibitors caused an increase in melanoma cell proliferation (Ref. ). IL-10 was also shown to be targeted by several other miRNAs (Ref. ). Another study showed that miR-106a could bind to the 3′UTR of IL-10 and significantly downregulate its expression in vitro (Ref. ). Two transcription factors, early growth response 1 (Egr1) and Sp1, were implicated in the induction of miR-106a, which consequently reduced IL-10 levels (Ref. ). Furthermore, an inverse relation was reported between Egr1-stimulated miR-106a and IL-10 levels. It is also worth mentioning that miR-106a is part of a cluster that is known to be dysregulated in 46% of human T-cell leukaemias. Thus, it was deduced that the promotion of leukaemic cell survival by IL-10 might occur through its modulation by miR-106a (Ref. ). Another miRNA reported to positively regulate IL-10 was miR-466l. Binding of miR-466l to the 3′UTR of IL-10 resulted in a net increase in the half-life of IL-10 mRNA, an effect achieved by preventing tristetraprolin (TTP) from binding to the IL-10 mRNA (Ref. ). TTP is an RNA-binding protein that plays a critical role in regulating proinflammatory immune responses by destabilizing target mRNAs via binding to the AU-rich elements (AREs) in their 3′UTRs (Ref. ). Moreover, miRNA/IL-10 interactions were reported in a study by Liu et al ., revealing that miR-98-mediated post-transcriptional control could potentially be involved in fine-tuning IL-10 production in endotoxin tolerance (Refs , ). On the other hand, IL-10 was reported to upregulate miRNAs that contribute towards an anti-inflammatory response, such as miR-187, and to downregulate those that are highly pro-inflammatory, such as miR-155 (Ref. ). IL-10 was able to downregulate the LPS-induced expression of miR-155 (Ref. ). Moreover, in in-vivo studies, mice deficient in miR-155 could not generate a protective immune response (Ref. ), whereas in IL-10-deficient mouse cells, miR-155 levels were shown to increase markedly. It was previously known that miR-155 could target a number of genes involved in the immune response, such as suppressor of cytokine signalling (SOCS), inhibitor of NF-κB kinase subunit epsilon (IKBKE) and Fas-associated death domain (FADD). Thus, targeting of this miRNA by IL-10 is likely to elucidate key mechanisms through which IL-10 exerts control in the cell. Another study uncovered details of the IL-10 pathway by profiling miRNA expression in IL-10-deficient mice. Ten miRNAs were found to be upregulated in IL-10-deficient mice (miR-19a, miR-21, miR-31, miR-101, miR-223, miR-326, miR-142-3p, miR-142-5p, miR-146a and miR-155) (Ref. ). miR-223 could hinder the Roquin ubiquitin ligase by binding to its 3′UTR, eventually regulating the production of IL-17 and of its inhibitor IL-10; this suggested a mechanism by which IL-10 could modulate the expression of IL-17 through miR-223. As previously mentioned, IL-10 can also induce the expression of anti-inflammatory miRNAs, such as miR-198, which is known to suppress TNF-α and IL-6, thereby promoting an anti-inflammatory environment (Ref. ). Collectively, the mutual interactions between IL-10 and miRNAs discussed in this section highlight an important miRNA-mediated layer of IL-10 regulation and provide new insights into the intertwined mechanistic details of this immunomodulatory cytokine. RNA transcripts longer than 200 nucleotides that are not translated into protein are regarded as lncRNAs (Refs , , ). LncRNAs play a significant role in the occurrence and development of cancer and can regulate the expression of cytokines such as IL-10 and IFN-γ, as reported in a study by Tang et al . on non-small cell lung cancer (NSCLC) (Ref. ).
A large number of lncRNAs have been associated with cancer, as recognized by genome-wide association studies on numerous tumours (Ref. ). They are believed to exhibit functions such as tumour suppression and promotion, and hence represent a promising novel class of biomarkers and therapeutic targets for cancer (Ref. ). Increased expression of the lncRNA SNHG1 was also reported in cancerous breast cells of CD4+ TILs, whereas the expression of FOX and IL-10 was greatly reduced by siRNA against SNHG1 (Ref. ). Moreover, silencing the lncRNA cox-2 was believed to increase the expression of IL-10, Arg-1 and Fizz-1 in M2 macrophages (Ref. ). A study conducted by Zhou et al . reported reduced expression of IL-10 via suppression of lnc-LINC00473 (Ref. ). Additionally, increased expression of IL-10 has been associated with knockdown of the lncRNA growth arrest-specific transcript 5 (GAS5), together with reduced CRC cell proliferation, while knockout of GAS5 promoted CRC colony formation and proliferation (Ref. ). LncRNAs are known to regulate various signalling pathways such as TGF-β, STAT3, Hippo, EGF, Wnt, PI3K/AKT and p53, whilst IL-10 is mostly involved in T-cell immune surveillance and suppression of cancer-associated inflammation. The expression of interleukins is regulated by lncRNAs that are known to be involved in various types of cancer. For instance, previous work by our group highlighted the potential of miRNA and lncRNA in the regulation of IL-10 in breast cancer, where miR-17-5p was identified as a dual regulator of TNF-α and IL-10. Additionally, knocking down the lncRNAs MALAT1 and/or H19 induced miR-17-5p and decreased TNF-α and IL-10 expression levels (Ref. ). Such reports highlighted the immune-activator potential of miRNAs and the oncogenic potential of lncRNAs in cancers by regulating immunological targets in the TME.
Hence, the extensive research on lncRNAs regulating IL-10 in various cancers needs further validation to establish a valid therapeutic link (Ref. ). CircRNAs are recognized as special ncRNA molecules with a distinctive ring structure that play significant roles as gene regulators, and they are considered one of the recently discovered epigenetic factors (Refs , ). Abnormal production of circRNAs was found to influence the onset, progression and metastasis of cancer by acting as either tumour-suppressive or oncogenic factors (Refs , – ). This happens via interactions with proteins, miRNA sponge function and posttranscriptional regulation (Refs , , ). Moreover, a line of evidence showed that circRNAs play pivotal roles in chemoresistance (Refs , ). Recently, specific circRNAs were found to possess an immunomodulatory function and to alter the response of the TME by regulating the functions of tumour-infiltrating immune cells. For instance, CD4+ T cell activity is enhanced by circ0005519 through promoting the expression of IL-13 and IL-6 via affecting the expression of hsa-let-7a-5p (Ref. ). On the other hand, circNT5C2 could attenuate the immune response by targeting miR-448 and serve as an oncogene via promoting tumour proliferation and metastasis (Ref. ). Since IL-10 function represents an unresolved enigma in cancer therapy, and since circRNAs also have dual roles in cancer therapy, a comprehensive understanding of circRNAs regulating IL-10 expression and function might be the key to answering numerous questions. Therefore, several studies that shed light on novel circRNAs regulating IL-10 in different oncological and non-oncological contexts are highlighted below. Some circRNAs can either enhance or inhibit IL-10 production and consequently could either promote or inhibit carcinogenesis. For example, circMERTK was reported to inhibit IL-10 production in colorectal cancer.
The same study concluded that circMERTK knockdown reduced the activity of CD8+ T cells, suggesting that circMERTK may affect immunosuppressive activity through the circMERTK/miR-125a-3p/IL-10 axis (Ref. ). According to another in vitro study, downregulation of secreted PD-L1 by non-small cell lung cancer cells upon knockdown of circCPA4 resulted in the activation of CD8+ T cells in the TME (Ref. ). In addition, the study found that PD-L1 abrogation reduced the expression of IL-10 in CD8+ T cells (Ref. ). Circ103516 expression was found to be inversely correlated with IL-10 in inflammatory bowel diseases, and it was thus postulated to play a proinflammatory role by sponging miR-19b. Additionally, it was discovered that circRNA HECTD1 contributed to the development of acute ischaemic stroke and was inversely linked with IL-10 production, suggesting that IL-10 plays a protective function in acute ischaemic stroke (Ref. ). In another cardiac context, the synthesis of IL-10 was decreased as a result of the overexpression of circFoxo3, a circRNA that is crucial in avoiding cardiac dysfunction brought on by myocardial infarction (Ref. ). Downregulation of circ00074854 was reported to prevent polarization of M2 macrophages, which consequently alleviated the invasion and migration of hepatocellular carcinoma cells. According to the same study, macrophages exposed to exosomes produced by HepG2 cells containing lower amounts of circ00074854 had significantly lower levels of IL-10 than those exposed to exosomes produced by control HepG2 cells, demonstrating the direct relationship between circ00074854 and IL-10 in different cancer settings (Ref. ). Furthermore, a recent study emphasized the potential of CircSnx5 as a therapeutic target for immunological disorders, since it has the ability to regulate DC-induced immunity and tolerance.
It is interesting to note that knockdown of CircSnx5 led to a significant drop in IL-10, whilst overexpression of CircSnx5 was found to block DC maturation and boost IL-10 expression (Ref. ). Another study focused on Circ0001598 as a potential target for treating breast cancer. It was discovered that circ0001598 regulates miR-1184 and PD-L1, significantly increasing breast cancer proliferation, chemo-resistance and escape from immune surveillance. In the same study, depletion of circ0001598 increased the susceptibility of breast cancer cells to Trastuzumab-induced CD8+ T cell cytotoxicity while decreasing the production of IL-10 (Ref. ). Another study showed that knockdown of circRNA PLCE1 ablated IL-10 production from macrophages, while PLCE1 encouraged the transformation of epithelial cells into mesenchymal tissue, thus aiding glycolysis in colorectal cancer (Ref. ). Another recently identified circRNA, circZNF609, has been linked to the pathogenesis of coronary artery disease, and forced overexpression of circZNF609 resulted in augmented IL-10 expression (Ref. ). It is also worth mentioning that a recent study discovered that circRNA NF1-419 attenuated inflammatory factors such as IL-10 and aging markers to postpone the onset of senile dementia (Ref. ). Also, circGFRA1 has been indicated as a potential therapeutic target in prostate cancer, where Meng et al . reported that, through a reduction in IL-10, knocking down circGFRA1 lessens the tumourigenic and immune-evading characteristics of prostate cancer cells (Ref. ). Zhang et al . also discovered the role of circ0005075 in mediating neuroinflammation, where silencing of circ0005075 in rat models resulted in a decrease in IL-10 production and protected against neuroinflammation (Ref. ). Another in vitro study revealed that circCdr1 overexpression enhanced the transcription of IL-10 in both naïve and pro-inflammatory macrophages (Ref. ).
CircCHST15 was recently reported to possess an oncogenic role by promoting immune escape through upregulating the expression of IL-10 and a sponging effect on miR-155 and miR-194 in lung cancer (Ref. ). Additionally, circ_0046523 was found to promote carcinogenesis, mediate immunosuppression and abrogate CD8+ T cell function in pancreatic cancer via enhancing the secretion of IL-10 and TGF-β (Ref. ). Furthermore, silencing circDNMT3B was discovered to decrease cell survival, promote apoptosis and increase IL-10 production in rat intestinal tissue (Ref. ). Collectively, it is quite clear that the circRNAs that inhibit IL-10 production from tumour cells act as tumour suppressors, while those that increase the production of IL-10 from tumour cells promote oncogenesis, cell survival and drug resistance and mediate immunosuppression. This highlights the promising role of such circRNAs as novel immunotherapeutic molecules that could ablate IL-10 production and act as a powerful immunomodulatory anti-cancer treatment for several cancer patients.

IL-10 gene

A very important basis for studies and research in IL-10 regulation is the examination of its genomic location and promoter structure. The IL-10 gene encodes a 178-amino-acid precursor protein that is secreted after cleavage of an 18-amino-acid signal peptide (Ref. ). At the proximal promoter sequence of IL-10 in the human genome, there is a TATA box located upstream of the translation start site, together with binding sites for several transcription factor family members, including nuclear factor-κB (NF-κB), STAT, specificity protein (Sp), CREB, CCAAT enhancer-binding protein (C/EBP) and c-musculoaponeurotic fibrosarcoma factor (c-MAF), which have been characterized as ‘critical’ factors in regulating IL-10 expression (Ref. ).

IL-10 signalling

Next, it is necessary to understand how IL-10 signals through its receptor. IL-10R is a heterodimeric receptor complex composed of two chains (IL-10Rα ‘R1’ and IL-10Rβ ‘R2’).
The α-chain binds directly to IL-10, while the β-chain is subsequently recruited into the IL-10/IL-10Rα complex (Ref. ). The binding of IL-10 to IL-10Rα induces a conformational change in the receptor, allowing it to dimerize with IL-10Rβ; this dimerization leads to signal transduction in target cells (Ref. ). When the IL-10 complex is formed, the tyrosine kinases Tyk2 and Jak1 become activated and phosphorylate specific tyrosine residues. This phosphorylation in turn activates the cytoplasmic, inactive transcription factor STAT-3, resulting in its translocation and transcriptional activation (Ref. ). IL-10 rapidly activates STAT-3, which remains phosphorylated over a sustained period, unlike the transient phosphorylation induced by IL-6 (Ref. ). The STAT-3 docking sites in IL-10R1 appear to be sufficient to induce IL-10-mediated proliferative responses (Ref. ), while the IL-10R2 intracellular domain seems to provide the docking site for Tyk2. Thus, most IL-10-specific cellular functions appear to reside in the IL-10R1 chain, whereas IL-10R2 recruits the downstream signalling kinases (Ref. ).

SNPs affecting IL-10

The IL-10 gene promoter and IL-10R have been found to include a significant number of SNPs (Refs , ). There is strong evidence that several of these polymorphisms are linked to differential expression of IL-10 in vitro and, in some situations, in vivo (Refs , , ). Some of these IL-10 variants have been associated with either low or high expression in several cancer types. For example, some genotypes have been shown to correlate with decreased expression of IL-10 and a higher risk of developing prostate cancer or non-Hodgkin's lymphoma (Refs , ). On the other hand, other evidence concluded that some IL-10 variants are associated with higher expression of IL-10 and, consequently, an elevated risk of developing multiple myeloma, cervical cancer and gastric cancer in patients harbouring a particular IL-10 variant (Refs – ).
Also, it has been demonstrated that the IL-10 gene transcription and translation were impacted by the SNPs in the IL-10 promoter region, leading to aberrant cell division and the emergence of breast cancer (Ref. ). summarizes most of the IL-10 polymorphisms documented in the literature and their association with cancer development and risk. Since IL-10 has a role in malignancy, whether it has a positive or negative effect is the subject of numerous disputes in the literature. As a result, whether IL-10 blockade is effective as an immunotherapeutic strategy is another unsolved puzzle. This opens the door to a crucial query that might provide the answer; however, it has not yet been addressed in the literature: it remains unclear whether SNPs in IL-10 or its receptor account for the varying effects of IL-10 inhibition on cancer treatment. A clinical investigation addressing the existence of SNPs in IL-10 or its receptors and their impact on the response to IL-10 therapy is necessary. These pharmacogenomic investigations will aid in the development of immunotherapeutic modalities by identifying the most qualified individuals to receive these cutting-edge drugs.

This review highlighted the controversial functions of IL-10 in oncology. Such contradictory information has prevented researchers from determining whether exogenous IL-10 administration or blockade will boost the immune system and combat changes in the TME. This could be explained by the fact that IL-10 has two distinct functions depending on which immune cell and which receptor are activated. Also, the epigenetic regulation of IL-10 in cancer via ncRNAs is quite complex .
Moreover, clarifying the relationship between IL-10 SNPs and the response to IL-10-targeted therapy will help us better understand the precise function of IL-10 in the TME and develop more individualized immunotherapeutic approaches by classifying patients into responders and non-responders. |
A Potential Indicator Gene, | a6fc6440-35ff-4ed7-bfe9-398a18ca4e13 | 11821766 | Microbiology[mh] |

Description of sampling sites and sample collection

Sites for sampling native greenhouse soils were selected based on the greenhouse location database of the Rural Development Administration (RDA) ( https://www.nongsaro.go.kr/portal/portalMain.ps?menuId=PS00001 ). At least five greenhouses were selected from each of the eight provinces in South Korea for the nationwide analysis ( , and ). According to the RDA database, each of the vegetable crops had been cultivated for 5–30 years. The majority of samples were collected in September 2022 and 2023, except for nine samples collected from Jeonbuk and Jeonnam in September 2021 and March 2022. Each soil sample was collected at a depth of 5 to 15 cm and within a horizontal distance of 5 to 15 cm from the crop. Eleven provincial park soil samples and 14 mountain soil samples were collected as controls from sites located at least 10 kilometers away from greenhouses to ensure minimal impact of anthropogenic activities. The selection of control soil sites was based on land use history and the absence of recent anthropogenic challenges (e.g., agricultural practices or chemical treatments). After collection, soil samples were immediately placed in plastic bags, transferred to the laboratory under cool conditions, and stored at –20°C until further analyses.

Measurement of environmental parameters of soil samples

Soil chemical properties, including calcium (Ca²⁺), magnesium (Mg²⁺), ammonium (NH₄⁺), phosphate (PO₄³⁻), and nitrate (NO₃⁻) concentrations and pH, were assessed. Eight grams of each soil sample was placed into a 50-mL conical tube with 40 mL of sterilized distilled water. After incubation for 1 h in a shaking incubator at 30°C, the pH of the sample was measured using the Orion Star™ A211 Benchtop pH Meter (Thermo Fisher).
The portable ion analyzer, Rapid-d PIA-001 (Technell), was used to measure the concentrations of the five ions (Ca²⁺, Mg²⁺, NH₄⁺, PO₄³⁻, and NO₃⁻) according to the manufacturer's instructions. Briefly, 200 μL of the supernatant of the incubated mixture was transferred to each reagent provided by the company (Technell). After allowing the reaction between the supernatant and reagent to proceed for 10 min, ion concentrations in the solution were measured using the analyzer.

DNA extraction

DNA was extracted from 0.5 g of soil using the DNeasy® PowerSoil® kit (Qiagen) in compliance with the manufacturer's instructions. The concentration of DNA was measured using the Nabi UV/Vis NANO Spectrophotometer (Microdigital).

High-throughput quantitative PCR

The detection and quantification of 319 ARGs, 57 mobile genetic elements (MGEs), and 16S rDNA were conducted using the SmartChip Real-time PCR System (Wafergen Biosystems) . The concentrations of all DNA extracts were adjusted to 20–30 ng μL⁻¹ in a total volume of 100 μL using the Nabi UV/Vis NANO Spectrophotometer (Microdigital). A total of 319 ARGs belonging to 11 classes, 57 MGEs, and 16S rRNA gene primers were selected . The quantitative amplification volume was 100 nL and contained 50 nL of 1× LightCycler 480 SYBR Green I Master Mix (Roche) (0.1 mg mL⁻¹), 20 nL of a DNA template (approximately 5 ng μL⁻¹), 500 nM of forward and reverse primers, and 19 nL of nuclease-free PCR-grade water. The Wafergen SmartChip Real-Time PCR Cycler loaded with the SmartChip was run under the following protocol: initial denaturation at 95°C for 10 min, followed by 40 cycles of denaturation at 95°C for 30 s and annealing at 60°C for 30 s. The target gene copy number was calculated as Copy number = 10^((31 − Ct)/(10/3)), where Ct is the threshold cycle; the relative abundance of each ARG was then obtained by dividing its copy number by the copy number of 16S rDNA, i.e., ARG copies per bacterial cell .
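As a numeric check, the Ct-to-copy-number conversion and the 16S normalization described above can be reproduced in a few lines of Python. This is a sketch; the Ct values used are hypothetical, not measurements from this study.

```python
# Sketch of the quantification above: copy number = 10 ** ((31 - Ct) / (10 / 3)),
# so Ct = 31 corresponds to 1 copy (the detection limit) and each ~3.33 cycles
# corresponds to one order of magnitude of template.
# The Ct values passed below are hypothetical, for illustration only.

def ct_to_copies(ct: float) -> float:
    """Convert a threshold cycle (Ct) to a target gene copy number."""
    return 10 ** ((31 - ct) / (10 / 3))

def relative_abundance(ct_arg: float, ct_16s: float) -> float:
    """ARG copy number normalized to 16S rRNA gene copies (ARG per bacterial cell)."""
    return ct_to_copies(ct_arg) / ct_to_copies(ct_16s)

print(ct_to_copies(31.0))              # 1.0 copy at the detection limit
print(relative_abundance(27.7, 13.5))  # hypothetical ARG vs 16S Ct values
```

The 10/3 divisor corresponds to an ideally efficient reaction, in which a tenfold change in template spans 10/3 ≈ 3.33 cycles (a per-cycle amplification factor of 10^(3/10) ≈ 2).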
A detection limit was set at a threshold cycle of 28, following the recommendations from Wafergen; a threshold cycle of 31 was also used as a detection limit . Each sample was measured in triplicate. If not all of the triplicates were amplified, the sample was discarded . When the amplification efficiency was within the range of 1.8–2.2, the Ct value was used in further analyses .

Statistical analysis

The total number of ARGs/MGEs and the relative abundance of ARGs were visualized as boxplots with the R package “ggplot2” (version 3.4.4) . The Shapiro test was conducted to verify the normality of samples. The significance of differences in soil properties between greenhouse and control soil samples was calculated using the Kruskal-Wallis test. Core genes were defined as ARGs and MGEs detected in >90% of samples. A heatmap of the relative abundance of core genes was generated using the R package “pheatmap” (version 1.0.12). Non-metric multidimensional scaling (NMDS) was performed on soil parameters using the Bray-Curtis dissimilarity metric, with two axes to visualize the data in two-dimensional space. This analysis was conducted using the R package “vegan” (version 2.6–4). Spearman's correlation between soil components and core ARGs/MGEs was calculated using the R packages “corrplot”, “tidyverse”, and “colorspace” . An assessment of the feature importance of core ARGs, to examine their potential as genetic determinants of AR, was conducted with random forest classification (RFC) ( https://github.com/scikit-learn/scikit-learn?tab=readme-ov-file ) based on the abundance of ARGs. Data were partitioned into two distinct data frames. The prediction process was iteratively performed, with identifiers being randomly shuffled in each iteration. The average feature importance was computed, sorted, and the top 10 features were visualized in a bar chart.
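The shuffling-and-averaging loop described for the RFC step can be sketched with scikit-learn as follows. This is an illustrative reconstruction under stated assumptions: the data are synthetic, and the gene names, split ratio, tree count and iteration count are placeholders, not the study's actual settings.

```python
# Sketch of the random-forest feature-importance step described above:
# repeated fits on shuffled splits, with importances averaged and sorted.
# The data are synthetic (NOT the study's ARG profiles); parameters are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
genes = ["str", "tetM", "oqxA", "tetQ", "vanG"]
X = rng.lognormal(mean=-8, sigma=1, size=(60, len(genes)))  # mock relative abundances
y = np.array([1] * 30 + [0] * 30)  # 1 = greenhouse, 0 = control
X[y == 1, 1] *= 50                 # make "tetM" strongly discriminatory

importances = np.zeros(len(genes))
n_iter = 20
for seed in range(n_iter):         # re-shuffle the sample identifiers each iteration
    X_tr, _, y_tr, _ = train_test_split(
        X, y, test_size=0.3, random_state=seed, stratify=y)
    rf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_tr, y_tr)
    importances += rf.feature_importances_
importances /= n_iter              # average importance across iterations

ranking = sorted(zip(genes, importances), key=lambda g: g[1], reverse=True)
print(ranking[0][0])               # the dominant discriminatory gene in this toy data
```

With measured ARG abundances in `X` and site labels in `y`, the sorted averages would reproduce the kind of top-10 importance ranking described above.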
The decision-making tree model was constructed using the packages “party”, “caret”, “tidyselect”, and “rpart” .
Physicochemical properties of greenhouse and control soils The concentrations of five ions (Ca 2+ , Mg 2+ , NH 4 + , NO 3 – , and PO 4 3– ) and pH are shown in . Average pH values in control and greenhouse samples were 5.4 and 6.0, respectively. All ion concentrations were higher in greenhouse samples than in control samples. NO 3 – , NH 4 + , Ca 2+ , Mg 2+ , and PO 4 3– concentrations were 54-, 2-, 7-, 8-, and 4-fold higher, respectively, in greenhouse samples than in control samples. Two-dimensional NMDS using the Bray-Curtis dissimilarity of the soil properties of samples showed that greenhouse samples were more closely clustered together and significantly differed from control samples (Stress=0.987, Non-linear fit R 2 =0.987, P <0.001 between groups) . Soil chemical parameters and pH were significantly higher in greenhouse samples than in control samples (Kruskal-Wallis test, P <0.05), while no significant differences were observed in the chemical characteristics of soil among the provinces examined (Kruskal-Wallis test, P =0.063). Abundance and diversity of ARGs and MGEs The relative abundance and diversity of ARGs in greenhouse and control samples are shown in . Control samples (mountain and provincial park samples) are in boxes with red dotted lines ( A and B). ARGs and MGEs were rarely detected in control samples (DWJ, JSJ, GCJ, DYJ, and YGJ), but were more abundant, at various concentrations, in all greenhouse samples ( A and B). In greenhouse samples, the highest abundance of ARGs was detected in JCC4, followed by GJC2 and ERG2, while the ISJ4 sample harbored the lowest abundance of ARGs. MGEs were dominant in GJC2, followed by GJG3 and WJJ1, but were rarely detected in ISJ4 and GWG1. Transposase-coding genes were the most abundant in GJC2, integrase-coding genes were frequently dominant in JJG, WJJ3, and WJJ1, and insertional sequences were the most frequently detected in GJG3. GJC2 had a greater abundance of ARGs and MGEs.
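The greenhouse-versus-control comparisons above rely on the Kruskal-Wallis test. A minimal pure-Python sketch of the H statistic is shown below (average ranks for ties, no tie-correction factor; a simplification of the full test, and the example values are made up, not the study's measurements).

```python
# Kruskal-Wallis H statistic: pool the groups, rank the pooled values
# (ties share the average rank), then compare per-group rank sums.
# H = 12/(N(N+1)) * sum(R_i^2 / n_i) - 3(N+1); the tie-correction divisor
# is omitted here for brevity.
def kruskal_wallis_h(*groups):
    pooled = [v for g in groups for v in g]
    n = len(pooled)
    order = sorted(range(n), key=lambda i: pooled[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        for k in range(i, j + 1):
            ranks[order[k]] = (i + j) / 2 + 1  # average 1-based rank
        i = j + 1
    h, start = 0.0, 0
    for g in groups:
        r = sum(ranks[start:start + len(g)])
        h += r * r / len(g)
        start += len(g)
    return 12.0 / (n * (n + 1)) * h - 3 * (n + 1)

# Completely separated groups of three give the maximal H for this design.
print(kruskal_wallis_h([1, 2, 3], [4, 5, 6]))  # approximately 3.857
```

In practice the H statistic is referred to a chi-squared distribution to obtain the P values reported above.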
Richness estimated by the number of ARGs and MGEs was significantly higher ( P <0.001) in greenhouse samples than in control samples ( A and B). The relative abundance of total ARGs and MGEs was also significantly higher ( P <0.001) in greenhouse samples than in control samples ( C and D). Discrimination of site types based on RFC and decision-making analyses The RFC analysis ranked str (aminoglycoside resistance gene), tetM (tetracycline resistance gene), and oqxA (a gene encoding efflux pump) as the top three discriminatory genes between greenhouse and control soils, followed by tetQ (tetracycline resistance gene), vanG (glycopeptide resistance gene), arsA (heavy metal resistance gene), cmr (chloramphenicol resistance gene), IS 5 (insertional sequence), tet40 (tetracycline resistance gene), and pica (MLSB resistance gene) based on their importance in discrimination ( A). The discriminatory power of ARGs was estimated using a decision-making tree model. Among the top 10 ARGs, tetM was selected to construct the decision-making tree ( B). Only two nodes were generated by the relative abundance of the tetM gene (either < or > 6.4×10 –5 target gene copies 16S rRNA gene copies –1 ). Analysis of abundance and correlation of candidate indicator genes with soil characteristics The relative abundance of 10 candidate indicator genes ( str , tetM , oqxA , tetQ , vanG , arsA , cmr , IS 5 , tet40 , and pica ) was compared between greenhouse and control samples . The abundance of eight candidate genes was significantly higher in greenhouse samples than in control samples ( P <0.001). Greenhouse samples had various levels of ARGs. The abundance of genes, such as tetM (0–0.02 target gene copies 16S rRNA gene copies –1 ), cmr (0–0.03 target gene copies 16S rRNA gene copies –1 ), and vanG (0–0.18 target gene copies 16S rRNA gene copies –1 ), was higher in greenhouse samples than in control samples. 
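The two-node decision tree just described reduces to a single threshold rule on tetM relative abundance. The sketch below applies that rule; the function name, class labels, and example abundances are illustrative assumptions, not outputs of the study's "rpart" model.

```python
# Single-split classifier mirroring the reported decision tree: a soil sample
# is called "greenhouse" when its tetM relative abundance (target gene copies
# per 16S rRNA gene copy) exceeds the reported split point of 6.4e-5, and
# "control" otherwise.
TETM_SPLIT = 6.4e-5

def classify_site(tetm_relative_abundance, split=TETM_SPLIT):
    return "greenhouse" if tetm_relative_abundance > split else "control"

# Illustrative abundances (not measurements from the study):
print(classify_site(2.0e-2))  # greenhouse
print(classify_site(0.0))     # control
```

A one-split tree like this is attractive for monitoring because it turns a qPCR readout into a site call with a single, auditable cutoff.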
In contrast, the abundance of ARGs was generally lower in control samples than in greenhouse samples. The relative abundance of the selected ARGs in most of the control samples was below the limit of quantification. The relationships between eight candidate indicator genes and soil chemical components were examined using a correlation analysis . Correlations were observed between some genes, such as str , tetM , oqxA , and tetQ (r=0.84–0.97, P <0.05). Additionally, correlations were found between ARGs and specific soil nutrients, including Ca 2+ , Mg 2+ , and NO 3 – (r=0.44–0.68, P <0.05). PO 4 3– also correlated with tetM (r=0.45, P <0.05), vanG (r=0.47, P <0.05), and IS 5 (r=0.40, P <0.05), but not with the other candidate indicator genes. No correlation was observed between NH 4 + and candidate indicator genes ( P >0.05).
Impact of agricultural activities on the composition of greenhouse soil Greenhouse cultivation has been developed to meet the growing demand for food production . However, this method frequently involves the input of manure, fertilizers, and irrigation water in limited areas, resulting in dynamic changes in the chemical components of soil . The present study clearly showed that soil characteristics significantly differed between greenhouse and control samples in September 2022 and 2023, which is consistent with previous findings .
pH was higher in greenhouse samples than in control samples, indicating potential changes in soil chemistry attributable to greenhouse agricultural activities. Furthermore, the analysis of five key ions, Ca 2+ , Mg 2+ , NH 4 + , NO 3 – , and PO 4 3– , revealed differences between greenhouse and control samples, with significantly higher concentrations being observed in the former, particularly NO 3 – . These results suggest that intensive and varied agricultural activities, including the application of fertilizers, affect soil nutrient dynamics in the closed greenhouse system and promote the proliferation of ARGs and ARB . Abundance of ARGs and MGEs in the agricultural environment The assessment of ARG abundance in greenhouse and control soils represents a critical aspect of the present study, providing insights into the prevalence and distribution of ARGs in agricultural environments. The comparative analysis revealed that the relative abundance of ARGs significantly varied between greenhouse and control samples, with the diversity and abundance of ARGs being higher in the former across multiple provinces in South Korea. These results are consistent with previous findings showing an increased abundance of ARGs in agricultural soils subjected to intensive farming practices . It is important to note that local conventional practices to maintain soil quality in greenhouses involve the continuous replacement of greenhouse soil with soil from outside the greenhouse. Therefore, indigenous greenhouse bacteria, which may be stable and competitive enough to thrive in local soils, acquire diverse ARGs through intensive agricultural activities in the greenhouse and serve as stable sources for the dissemination of ARGs through these local soil-replacement practices . The application of manure has been reported to increase resistance genes to aminoglycoside, tetracycline, and sulfonamide in the agricultural environment .
Identification of candidate indicator genes An indicator gene for assessing AR contamination is important for understanding the extent of ARG dissemination in various environments, particularly in agricultural soils. While conventional methods for measuring antibiotic concentrations, such as mass spectrometry, may not provide direct insights into the actual abundance of ARGs, the use of an indicator gene may be a practical solution for establishing a management policy on AR contamination in agricultural environments. The present results suggest that some ARGs, including core genes, act as candidate indicators in the agricultural environment, with a few overlapping with indicators in previous studies . Four genes, intI1 , sul1 , sul2 , and tetM , were previously proposed as AR indicator genes in manure-amended soil due to their low degradation rate and significant abundance . Furthermore, seven genes, intI1 , intI2 , ermB , ermC , qepA , qnrA , and qnrS , were suggested as genetic probes for AR contamination . We prioritized the necessity of identifying proper AR indicators in greenhouse facilities. An indicator ARG or MGE needs to meet two requirements: i) clinical significance and ii) a relationship with MGEs for HGT . Among the candidate indicator genes detected in the present study, the aminoglycoside resistance genes str , strA , aadA5 , and aac(6')-IIc may be transmitted between clinical and environmental strains as gene cassettes within intI1 -encoding integrons . The genes ermB , qnrS , and tetM in this study satisfied the requirements of clinical relevance and transferability . Although these genes, as well as aadA5 , were selected as AR indicator genes in various environmental settings , these genes were not identified as indicators in the agricultural environment by RFC and the decision-making model.
In this study, tetM met the requirements of clinical relevance and a relationship with MGEs based on previous studies as well as its prevalence in the soil resistome in the present study. Therefore, tetM was selected as a discriminatory indicator ARG in greenhouses. In the future, we intend to investigate the relationship between the concentrations of applied and persistent agricultural antibiotics and the amount of tetM in agricultural soil in South Korea. The present study provides important insights for the identification of the potential indicator tetM gene, which may facilitate the effective management of AR contamination in agricultural environments. Relationship between candidate indicator genes and anthropogenic activity Implications of candidate indicator genes in agricultural settings Eight of the ten candidate indicator genes were selected, after the exclusion of a heavy metal resistance gene ( arsA ) and an efflux pump gene ( oqxA ). The identification of the agriculturally relevant ARGs str , tetD , tetM , tetQ , and tet40 implies that anthropogenic activities, including the application of oxytetracycline and streptomycin, have increased the abundance of specific ARGs. Streptomycin resistance genes, such as str and strA , are frequently found in agricultural settings due to the extensive use of streptomycin in both plant and animal agriculture for disease control . In addition, the tetracycline resistance genes tet40 , tetD , tetM , and tetQ are prevalent in agricultural environments, largely due to the widespread use of tetracycline antibiotics in veterinary medicine and as growth promoters in livestock production . These tetracycline resistance genes appear to be derived from anthropogenic sources ( e.g. , manure application and the usage of antibiotics).
Therefore, the presence of streptomycin and tetracycline resistance genes, particularly tetM as a powerful discriminatory indicator gene, may facilitate the selection and dissemination of resistance genes to human pathogens, which is of significant importance to public health. Representatives of anthropogenic activities The observed increases in soil nutrients, such as NH 4 + , NO 3 – , and PO 4 3– , due to anthropogenic activities, including fertilizer application, may contribute to the proliferation of ARGs in agricultural settings . Previous studies showed that high concentrations of these ions correlated with the increased abundance of ARGs, such as ermB and tetM , which was facilitated through mechanisms such as the conjugative transposon Tn916 under tetracycline exposure . Agricultural practices involving the application of manure and fertilizers may amplify specific ARGs, such as tetM in the present study. However, the results obtained herein do not conclusively establish the direct impact of anthropogenic activities on ARG proliferation. Limitations of the present study It is important to note that information on irrigation and the application of manure or fertilizers was not available for this study. Due to the established impact of manure on the soil resistome , the lack of data on manure application in the present study may limit our ability to fully understand the sources and drivers of ARGs in the soils examined. Future research that includes detailed information on manure and fertilizer usage is needed to better elucidate their impact on the presence and abundance of ARGs in different agricultural settings.
The present study elucidated the significant impact of agricultural activities on soil compositions and ARG abundance in greenhouse environments. Agricultural activities, including the use of antibiotics, application of manure, and irrigation, drive the proliferation of ARGs and MGEs in the agricultural environment.
We identified the tetM gene as a potential indicator gene for assessing AR contamination in greenhouse soils based on its ubiquitous presence, transferability, relationship with nutrients driven by anthropogenic activities, and discriminatory power. The recognition of tetM as an indicator gene provides valuable insights for the consistent monitoring and management of AR contamination in agricultural ecosystems, which will mitigate the spread of ARGs and preserve environmental health. This study will contribute to advanced strategies for ongoing surveillance efforts to combat antibiotic resistance in agricultural settings. Han, S., Shin, R., Ryu, S.-H., Unno, T., Hur, H.-G., and Shin, H. (2024) A Potential Indicator Gene, tetM , to Assess Contamination by Antibiotic Resistance Genes in Greenhouses in South Korea. Microbes Environ 39 : ME24053. https://doi.org/10.1264/jsme2.ME24053
Suggestion of creatine as a new neurotransmitter by approaches ranging from chemical analysis and biochemistry to electrophysiology
Most neurotransmitters were discovered for their effects on peripheral tissues, with muscle contraction or relaxation as a major readout. Glutamate (Glu) and gamma-aminobutyric acid (GABA) were discovered partly because of their peripheral effects and partly because of their effects on spinal neurons. There is no reason for central neurotransmitters to also act peripherally, but relatively little efforts have been reported to find small-molecule neurotransmitters acting only on CNS neurons with no peripheral bioassays available. Premature assumptions and technical difficulties are among the major reasons why the hunt for neurotransmitters has not been a highly active area of research over the last three decades. Are there more neurotransmitters and how can they be discovered? Classic neurotransmitters are stored in synaptic vesicles (SVs) . They are released upon electric stimulation before being degraded enzymatically or taken up into the presynaptic terminal by cytoplasmic transporters and into SVs by vesicular transporters . Most of the major textbooks list either three or four criteria of a neurotransmitter: presence in presynaptic neurons, release upon stimulation, action on postsynaptic neurons, mechanism of removal. Some molecules commonly accepted as neurotransmitters still do not meet all the criteria listed in different textbooks, but they nonetheless play important functional roles in the CNS and their defects cause human diseases. Over time, different small molecules have been proposed to function as neurotransmitters (e.g., ; ), but none satisfies all the criteria. Robust and reliable detection of the candidate molecule in SVs is often, though not always, the problem (cf. ). Beginning in 2011, we have been actively searching for new neurotransmitters in the mammalian brain. We have tried different approaches, including searching for neuroactive substances in the cerebral spinal fluid (CSF) and following transporters potentially localized in the SVs. 
One approach that we have now taken to fruition is the purification of the SVs from mouse brains coupled with chemical analysis of their contents. We have found known transmitters such as Glu, GABA, ACh, and 5-hydroxytryptamine (5-HT). But more importantly, we have reproducibly detected creatine (Cr) in SVs. Cr was discovered in 1832 by Michel-Eugène Chevreul and has long been considered an energy buffer in the muscle and the brain . Half of the Cr in a mammalian animal is thought to come from diet and the rest from endogenous synthesis . Most of the Cr is present in the muscle, but it is also present in the brain. Although most of the endogenous Cr is synthesized in the kidney, the pancreas, and the liver , Cr is also synthesized in the brain . Solute carriers (SLC) contribute to both cytoplasmic and vesicular transporters. With 19 members in humans, SLC family 6 (SLC6) consists of secondary active transporters relying on electrochemical Na + or H + gradients . SLC6 is also known as the neurotransmitter transporter (NTT) family because some members transport neurotransmitters such as GABA (by SLC6A1 or GABA transporter 1, GAT1; SLC6A13 or GAT2; SLC6A11 or GAT3) , NA (by SLC6A2 or NA transporter [NET]) , dopamine (by SLC6A3 or DAT) , 5-HT (by SLC6A4 or serotonin transporter [SERT]) , and glycine (by SLC6A9, or GlyT1; SLC6A5 or GlyT2) . Cr is transported by SLC6A8 (also known as CrT, CT1, or CRCT) . In addition to peripheral organs and tissues, SLC6A8 is also expressed in the nervous system, where it is found mainly in neurons . SLC6A8 protein could be found on the plasma membrane of neurons . The functional significance of SLC6A8 in the brain is supported by the symptoms of humans defective in SLC6A8. Mutations in SLC6A8 were found in human patients with intellectual disability (ID), delayed language development, epileptic seizures, and autistic-like behaviors . They are collectively known as Cr transporter deficiency (CTD), with ID as the hallmark.
Particular vulnerability of language development has been observed with some SLC6A8 mutations, whose carriers had mild ID but severe language delay. CTD accounts for approximately 1–2.1% of X-linked mental retardation. While CTD is highly prevalent among males with ID, it is also present in females, with an estimated carrier frequency of 0.024%. Slc6a8 knockout mice showed typical symptoms of human CTD patients, with early and progressive impairment in learning and memory. Mice with brain- and neuron-specific knockout of Slc6a8 showed deficits in learning and memory without the changes in locomotion caused by peripheral loss of Slc6a8. Deletion of Slc6a8 from dopaminergic neurons in the brain caused hyperactivity. These results demonstrate that SLC6A8 is functionally important in neurons. Cr deficiency syndromes (CDS) are inborn errors of Cr metabolism, which can result from defects in one of three genes: guanidinoacetate methyltransferase ( GAMT ), arginine-glycine amidinotransferase ( AGAT ), and SLC6A8 . That they all produce brain disorders indicates the functional importance of Cr in the brain. Here, we first biochemically purified SVs from the mouse brain and discovered the presence of Cr, as well as the classic neurotransmitters Glu, GABA, ACh, and 5-HT, in SVs. We then detected calcium (Ca 2+ )-dependent release of Cr, Glu, and GABA, but not ACh or 5-HT, when neurons were depolarized by increased extracellular concentrations of potassium (K + ). Both the level of Cr in SVs and the amount of Cr released upon stimulation were decreased significantly when either the Slc6a8 gene or the Agat gene was genetically eliminated. When Cr was applied to slices from the neocortex, the activities of pyramidal neurons were inhibited. Furthermore, we confirmed that Cr was taken up by synaptosomes and found that Cr uptake was significantly reduced when the Slc6a8 gene was deleted. Finally, we found that Cr was transported into SVs.
Thus, multidisciplinary studies with biochemistry, genetics, and electrophysiology have suggested that Cr is a new neurotransmitter, though the discovery of a receptor for Cr would prove it.

Detection of Cr in SVs from the mouse brain

To search for new neurotransmitters, we tried several approaches. For example, we used Ca 2+ imaging to detect neuroactive substances in the cerebrospinal fluid (CSF), but it was difficult to rule out existing neurotransmitters and to select responses from potentially new neurotransmitters. We also transfected cDNAs for all human SLCs into dissociated cultures of primary neurons from the mouse brain and found that more than 50 of the SLCs could be localized in SVs. However, when we used CRISPR-Cas9 to tag some of the candidate SLCs in mice, some of them were found to be expressed outside the CNS, indicating that, while ectopically expressed candidate SLCs could be localized on SVs, the endogenous counterparts were not. Here, we report our approach using the purification of SVs as the first step. Synaptophysin (Syp) is a specific marker for SVs, and an anti-Syp antibody was used to immunoisolate SVs. Visualization by electron microscopy (EM) (left panel) showed that the purified vesicles were homogeneous, with an average diameter of 40.44 ± 0.26 nm (n = 596 particles) (right panel), consistent with previous descriptions of SVs. Immunoblot analysis with 20 markers of neuronal subcellular organelles and 1 glial marker indicated that our purifications were highly effective, with SV markers detected after purification with the anti-Syp antibody but not with the control immunoglobulin G (IgG). SV proteins included Syp, synaptotagmin (Syt1), synaptobrevin 2 (Syb2), SV2A, H + -ATPase, and the vesicular neurotransmitter transporters for glutamate (VGLUT1, VGLUT2) and GABA (VGAT).
Immunoisolation by the anti-Syp antibody did not bring down markers for the synaptic membrane (with SNAP23 as a marker), postsynaptic components (PSD95 and GluN1), the Golgi apparatus (GM130 and Golgin 97), the early endosome (early endosome-associated 1 [EEA1]), the lysosome (LC3B and cathepsin B), the cytoplasm (glyceraldehyde-3-phosphate dehydrogenase [GAPDH]), mitochondria (voltage-dependent anion channel [VDAC]), the cytoplasmic membrane (calcium voltage-gated channel subunit alpha 1 [CACNA1A]), the axonal membrane (glucose transporter type 4 [GluT4]), or the glial membrane (myelin basic protein [MBP]). These results indicated that the SVs we obtained were of high integrity and purity. To detect and quantify small molecules as candidate transmitters in the purified SVs, capillary electrophoresis-mass spectrometry (CE-MS) was optimized and utilized. We found that the levels of classical neurotransmitters such as Glu, GABA, ACh, and 5-HT were significantly higher in SVs pulled down by the anti-Syp antibody than in lysates pulled down by the control IgG. Consistent with previous reports, significant enrichment of neurotransmitters was observed only in SVs immunoisolated at near 0°C, not at room temperature (RT). By contrast, another small molecule, alanine, was not elevated in SVs compared to the control. The amount of Glu was 171.1 ± 5.4 pmol/μg anti-Syp antibody (n = 10), approximately 10 times that of GABA (17.81 ± 1.47 pmol/μg anti-Syp antibody, n = 10). The amount of ACh was 1.29 ± 0.10 pmol/μg anti-Syp antibody (n = 10), approximately 0.072 times that of GABA. The amount of 5-HT was 0.096 ± 0.017 pmol/μg anti-Syp antibody (n = 10). Thus, our purification and detection methods were highly reliable and sensitive enough to detect established neurotransmitters.
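The relative amounts quoted above follow from simple arithmetic on the reported pmol/μg values; a minimal sketch (all numbers are the means reported in the text):

```python
# Reported SV contents, in pmol per microgram of anti-Syp antibody (means only).
amounts = {"Glu": 171.1, "GABA": 17.81, "ACh": 1.29, "5-HT": 0.096}

# Glu relative to GABA: roughly 10-fold, matching the ~10x figure in the text.
glu_to_gaba = amounts["Glu"] / amounts["GABA"]
print(round(glu_to_gaba, 1))   # -> 9.6

# ACh relative to GABA: approximately 0.072, as stated.
ach_to_gaba = amounts["ACh"] / amounts["GABA"]
print(round(ach_to_gaba, 3))   # -> 0.072
```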
Under the same conditions, we also detected Cr in SVs (n = 14). The amount of Cr in SVs was 3.43 ± 0.40 pmol/μg anti-Syp antibody, which was approximately 2% of Glu, 19% of GABA, 266% of ACh, and 3573% of 5-HT. These differences are unlikely to reflect different levels of each neurotransmitter per SV; more likely, they reflect the relative abundance of SVs containing each neurotransmitter. Also, 85–90% of neurons in the mouse brain are glutamatergic while 10–15% are GABAergic, which can explain our detection of Glu at approximately 10 times the level of GABA. Similarly, cholinergic neurons (5.67 × 10 5 ) represent 0.81% of the total number of neurons (approximately 70 million) in the mouse brain, and serotonergic neurons (approximately 26,000) represent 0.037% of total neurons. Assuming that the content of each neurotransmitter in a single SV is similar, extrapolation from the above data would suggest that approximately 1.3–2.15% of neurons in the mouse brain are creatinergic. To distinguish whether the small molecules co-purified with SVs were inside the SVs or merely associated with their outside, we tested the dependence of their presence on temperature and on the electrochemical gradient of H + . Cr was significantly reduced in SVs purified at RT compared to those immunoisolated at near 0°C, supporting the presence of Cr inside, rather than outside, SVs. Classical neurotransmitters are stored in SVs with an acidic lumen (pH 5.6–6.4). To further verify the storage of Cr in SVs and to examine the role of the H + electrochemical gradient, we applied pharmacological inhibitors during purification. The proton ionophore FCCP (carbonyl cyanide-4-(trifluoromethoxy)phenylhydrazone) was used to dissipate the H + electrochemical gradient. FCCP significantly reduced the amount of Cr, as well as of classical neurotransmitters, in SVs.
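Why should dissipating the H + gradient affect different molecules to different extents? A textbook weak-base trapping picture offers intuition: the more a molecule is protonated at the acidic vesicular pH, the more its retention depends on ΔpH. A simplified sketch, using each molecule's approximate pKa or pI from the text as a stand-in for an effective protonation constant (a deliberate simplification, since a pI is not a single pKa):

```python
def protonated_fraction(pka: float, ph: float) -> float:
    # Henderson-Hasselbalch for a base: fraction carrying the proton at a given pH.
    return 1.0 / (1.0 + 10.0 ** (ph - pka))

VESICLE_PH = 5.8  # within the reported luminal range of pH 5.6-6.4

# Approximate effective pKa/pI values for the molecules discussed in the text.
for name, pka in [("5-HT", 9.31), ("Cr", 7.94), ("GABA", 7.33), ("Glu", 3.22)]:
    print(name, round(protonated_fraction(pka, VESICLE_PH), 3))
```

The computed protonated fractions fall in the order 5-HT > Cr > GABA > Glu, paralleling the order of FCCP sensitivity reported in the text.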
The extent of FCCP-induced reduction correlated with the pKa or pI (isoelectric point) of each molecule: 5-HT (pKa values predicted to be 10 and 9.31) > Cr (pI of ~7.94) > GABA (pI of 7.33) > Glu (pI of 3.22). Nigericin, a K + /H + exchanger that dissipates ΔpH, also reduced the amount of Cr and classical neurotransmitters in SVs. Furthermore, in the presence of FCCP or nigericin, SV Cr was reduced to a level comparable to that pulled down by control IgG, demonstrating that the storage of Cr in SVs depends on the H + gradient. As a control, the non-neurotransmitter molecule alanine in SVs was not changed by the inhibitors.

Reduction of SV Cr in mouse mutants lacking Slc6a8

SLC6A8 , located on the X chromosome, encodes a transporter for Cr, and its loss-of-function (LOF) mutations cause behavioral deficits in humans and mice. To investigate whether SLC6A8 affects Cr in SVs, we generated Slc6a8 knockout (KO) mice. Exon 1 of the Slc6a8 gene was partially replaced with CreERT2-WPRE-polyA by CRISPR/Cas9. Examination by reverse transcription polymerase chain reaction (RT-PCR) and quantitative real-time RT-PCR (qPCR) showed that Slc6a8 mRNA was not detected in either male or female mutants and was significantly reduced in female heterozygotes ( Slc6a8 +/- ). Consistent with previous reports, the body weights of Slc6a8 KO mice were reduced. Brain weight was not significantly different between Slc6a8 KO and WT mice. When we examined the contents of SVs isolated by the anti-Syp antibody versus the control IgG, a significant reduction was observed only for Cr, not for classical neurotransmitters. While Cr pulled down by IgG was not significantly different between Slc6a8 -/Y and Slc6a8 +/Y mice, SV Cr purified by the anti-Syp antibody from Slc6a8 -/Y mice was reduced to approximately one-third that of WT ( Slc6a8 +/Y ) littermates (n = 14). Compared to the IgG control, Cr in SVs was enriched in WT mice but not in Slc6a8 -/Y mice.
In both Slc6a8 -/Y and Slc6a8 +/Y mice, classical neurotransmitters in SVs were all enriched compared to IgG controls. The amounts of Glu, GABA, ACh, and 5-HT in SVs were not different between Slc6a8 -/Y and Slc6a8 +/Y mice. Molecules not enriched in SVs from WT mice, such as alanine, were also unaffected by Slc6a8 KO. It is unlikely that the specific reduction of Cr in SVs from Slc6a8 KO mice was due to technical artifacts. First, the possibility of fewer SVs being obtained from Slc6a8 KO mice was precluded by immunoblot analysis, as assessed by the SV markers Syp, Syt, and H + -ATPase. Second, data collected by high-resolution MS (Q Exactive HF-X, Thermo Scientific, Waltham, MA) also revealed a selective decrease of SV Cr (m/z = 132.0576) in Slc6a8 KO mice (n = 8), as quantified by peak area. Peak areas for Glu (n = 8, m/z = 148.0604), GABA (n = 8, m/z = 104.0712), ACh (n = 8, m/z = 146.1178), and alanine (n = 8, Ala, m/z = 90.055) were not significantly different between SVs immunoisolated with the anti-Syp antibody and control IgG from WT and Slc6a8 KO mice. However, the peak area and amplitude of the Cr signal (n = 8) were significantly increased in SVs from WT mice (anti-Syp antibody vs IgG) but not in those from Slc6a8 KO mice.

Reduction of SV Cr in mouse mutants lacking Agat

AGAT is the enzyme catalyzing the first step in Cr synthesis, and its absence also leads to Cr deficiency in the human brain and mental retardation. To investigate the requirement of AGAT for SV Cr, we utilized Agat 'knockout-first' mice (Figure 6A). The targeting cassette, containing Frt (Flip recombination site)-flanked EnS2A, an IRES::lacZ trapping cassette, and a floxed neo cassette, was inserted downstream of exon 2 to interfere with normal splicing of Agat pre-mRNA. Examination by RT-PCR and quantitative RT-PCR showed reduction of Agat mRNA in Agat +/- mice and absence of Agat mRNA in Agat -/- mice.
Body weight, but not brain weight, of Agat -/- mice was lower than that of both Agat +/+ and Agat +/- mice, similar to Slc6a8 KO mice. Immunoblot analysis showed that the SVs purified from the brains were not significantly different among Agat +/+ , Agat +/- , and Agat -/- mice, as supported by quantitative analysis of Syp, Syt, and H + -ATPase (n = 20, with two repeats for 10 samples). We analyzed the small molecules present in SVs from Agat +/+ , Agat +/- , and Agat -/- mice. Cr was significantly enriched in SVs from all three genotypes compared to the IgG control. However, the level of Cr in SVs from Agat -/- mice was significantly lower than in those from Agat +/+ and Agat +/- mice (n = 10). Glu, ACh, and 5-HT were all enriched in SVs (compared to IgG controls) and were not significantly different among Agat +/+ , Agat +/- , and Agat -/- mice. GABA in SVs from Agat -/- mice was also decreased, by 30% relative to Agat +/+ mice, a smaller reduction than that of Cr (78.4%). Alanine was not different among the three genotypes (n = 6). Thus, Cr and GABA, but not other neurotransmitters, were reduced in SVs from Agat KO mice.

Pattern of SLC6A8 expression indicated by knockin mice

We generated Slc6a8 HA knockin mice by CRISPR/Cas9. Three repeats of the hemagglutinin (HA) tag, the T2A sequence, and CreERT2 were inserted in-frame at the C terminus of the SLC6A8 protein. To examine the expression pattern of SLC6A8, we performed immunocytochemistry with an antibody against the HA epitope in Slc6a8 HA and WT mice. Slc6a8 HA mice showed positive signals in the olfactory bulb, the piriform cortex, the somatosensory cortex, the ventral posterior thalamus, the interpeduncular nucleus, and the pontine nuclei. In addition, moderate levels of immunoreactivity were observed in the motor cortex, the medial habenular nucleus, the hippocampus, and the cerebellum. These results were consistent with previous reports. WT mice were negative for anti-HA antibody staining.
Ca 2+ -dependent release of Cr upon stimulation

Classical neurotransmitters are released from SVs into the synaptic cleft in a Ca 2+ -dependent manner after stimulation. For example, high extracellular potassium (K + ) stimulates Ca 2+ -dependent release of Glu, GABA, and other neurotransmitters in brain slices. We therefore used 300-μm-thick coronal slices of the mouse brain within 1–2 mm posterior to bregma, because the cortex, the thalamus, the habenular nucleus, and the hippocampus were positive for SLC6A8 (cf. above). We monitored the effect of K + stimulation by recording neurons in the slices. Immediately after K + stimulation, pyramidal neurons in the CA1 region of the hippocampus were depolarized, firing a train of action potentials and reaching a large depolarization plateau in less than 1 min. K + -induced depolarization persisted for several minutes before returning to baseline, and was washed out within 10 min. Thus, superfusates were collected in 1 min fractions at 1.5 min before (control) and after K + stimulation, and at 10 min after the wash, and the metabolites in the superfusates were analyzed by CE-MS. In the presence of Ca 2+ , depolarization with elevated extracellular K + led to robust release of Glu and GABA in slices from WT ( Slc6a8 +/Y ) mice (n = 7 per group). After a 10 min wash, the levels of Glu and GABA returned to baseline. In the presence of Ca 2+ , depolarization with elevated K + also led to robust release of Cr, and extracellular Cr returned to baseline after the 10 min wash. For quantification, the stimulated release of each metabolite was calculated by subtracting the basal level from the total release in response to K + stimulation. In the presence of Ca 2+ , K + stimulation induced efflux of Glu, GABA, and Cr at 0.46, 0.33, and 0.086 nmol/min, respectively (n = 7 per group).
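The quantification step used here (stimulated release equals the total release during K + stimulation minus the basal release) can be sketched with simple arithmetic; the fraction values below are hypothetical and chosen only to reproduce the reported Glu efflux rate of 0.46 nmol/min:

```python
def stimulated_efflux(total_nmol: float, basal_nmol: float, minutes: float = 1.0) -> float:
    # Stimulated release = total release during K+ stimulation minus the basal
    # release, expressed per minute of fraction collection (1 min fractions here).
    return (total_nmol - basal_nmol) / minutes

# Hypothetical 1 min fractions illustrating the subtraction for Glu.
print(round(stimulated_efflux(total_nmol=0.56, basal_nmol=0.10), 2))  # -> 0.46
```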
From the detection limits of ACh and 5-HT in our system, we inferred that the efflux rate for ACh was lower than 0.001 nmol/min and that for 5-HT lower than 0.003 nmol/min. The efflux rate of Cr in brain slices is thus lower than those of Glu and GABA but higher than those of ACh and 5-HT. The Ca 2+ dependence of transmitter release was examined by comparing responses to ACSF without Ca 2+ or elevated K + (supplemented with 1 mM EGTA), elevated extracellular K + in the absence of Ca 2+ (supplemented with 1 mM EGTA), and elevated K + in the presence of 2.5 mM Ca 2+ (n = 5 per group). In the absence of Ca 2+ , elevated K + stimulated the release of small but significant amounts of Glu and GABA, with efflux rates of 0.056 and 0.066 nmol/min, respectively. In the presence of 2.5 mM Ca 2+ , elevated K + further augmented the release of Glu and GABA by 5–6 times, confirming the previously reported Ca 2+ -dependent release of neurotransmitters in response to depolarization. Cr was also released in both a Ca 2+ -dependent and a Ca 2+ -independent manner: more Cr was released in response to K + stimulation in the presence of 2.5 mM Ca 2+ than in its absence. These results demonstrate Ca 2+ -dependent release of Cr upon stimulation.

Reduced Cr release in Slc6a8 and Agat mutant mice

We examined whether Slc6a8 KO affected K + -induced release of Cr. While Glu and GABA were released in slices from Slc6a8 KO ( Slc6a8 -/Y ) mice at levels not significantly different from those of WT mice, release of Cr in response to K + stimulation was significantly reduced in Slc6a8 -/Y mice compared to Slc6a8 +/Y mice. The basal level of Cr in Slc6a8 KO mice was also lower than that of WT mice. In addition, K + stimulation-induced release of Cr persisted to some extent even after 10 min of washout, possibly because presynaptic terminals in Slc6a8 KO mice cannot take Cr back up from the synaptic cleft (Figure 8).
Experiments with slices from brains of Slc6a8 KO ( Slc6a8 -/Y ) mice showed that Ca 2+ -dependent release of either Glu or GABA was not affected by the Slc6a8 genotype. By contrast, Ca 2+ -dependent release of Cr was abolished in Slc6a8 -/Y slices. Interestingly, Ca 2+ -independent release of Cr was reduced by about one-third in Slc6a8 -/Y slices, although this did not reach statistical significance. In the absence of Ca 2+ , the basal level of Cr was not changed in Slc6a8 KO mice. Taken together, these results indicate that there is Ca 2+ -dependent release of Cr upon stimulation and that SLC6A8 is required specifically for Ca 2+ -dependent release of Cr, but not for Ca 2+ -dependent release of other neurotransmitters such as Glu and GABA, nor for Ca 2+ -independent release of Cr. Knockout of Agat selectively reduced K + -evoked release of Cr, but not that of Glu or GABA (n = 5 per group). Although K + stimulation still elicited Cr release from brain slices of Agat +/- mice, the efflux rate in Agat -/- mice was reduced to less than 10% of that in Agat +/+ mice and 20% of that in Agat +/- mice.

Cr inhibition of neocortical neurons

Our own data and previous reports have shown SLC6A8 in the neocortex, with dense SLC6A8-HA immunoreactive fibers in layer 4. Layer 5 neurons in the somatosensory cortex have previously been reported to express SLC6A8. To investigate the electrophysiological effects of Cr, we performed whole-cell patch-clamp recordings from pyramidal neurons in layer 4/5 of the somatosensory cortex. Medium-sized pyramidal neurons with a membrane capacitance (Cm) of 114.96 ± 3.92 pF (n = 51) were recorded. These neurons exhibited regular firing patterns in response to depolarizing current injection, with moderate maximal evoked spiking frequencies of 10–30 spikes per 500 ms, increasing inter-spike intervals during depolarizing steps, high action potential amplitude (81.64 ± 1.06 mV), and large spike half-width (1.12 ± 0.031 ms).
Cr was bath-applied only after the evoked firing pattern had reached a steady state. Of the 51 neurons, 16 were inhibited by 100 μM Cr: fewer spikes were evoked in Cr-responsive neurons in response to depolarizing current injections (25 pA steps, 500 ms) during Cr application. The inhibitory effect of Cr was reversible, typically observed within 2–3 min of Cr application (with the maximal effect between 2 and 8 min), and disappeared after 10–25 min of washout. It could be repeated by a second application of Cr. The rheobase, defined as the minimal current necessary to elicit an action potential, was increased during bath application of Cr. The inhibitory effect was most obvious near the spike threshold: when a neuron was depolarized with a current 50 pA above rheobase, the number of evoked spikes decreased dramatically during Cr application. Cr also mildly decreased the input resistance, slightly hyperpolarized the resting membrane potential, or reduced the amplitude of the afterhyperpolarization (AHP) following the first evoked action potential. The spike threshold, amplitude, and half-width were not changed by Cr. The remaining 35 neurons were not responsive to Cr: Cr did not change any of the electrical parameters tested, including evoked firing rates, rheobase, resting membrane potential, spike threshold, amplitude, and half-width. In addition, the electrical properties of responsive and unresponsive neurons were not significantly different. With the limited number of neurons recorded, the proportion of responsive neurons appeared higher in layer 4 or at the layer 4/5 border than in deeper layer 5.

SLC6A8-dependent uptake of Cr into the synaptosomes

Along with enzymatic degradation, reuptake by transporters serves as an important way to remove neurotransmitters released into the synaptic cleft. As synaptosomes contain the apparatus for neurotransmission, they are often used for studying the uptake of neurotransmitters.
To investigate whether Cr uptake into synaptosomes required SLC6A8, we first examined whether SLC6A8 was present in synaptosomes. Using Slc6a8 HA knockin mice and an anti-HA antibody, we found that SLC6A8-HA was present and enriched in the crude synaptosomal fraction (P2 fraction; enrichment score P2/H = 1.76 ± 0.15, n = 4) and in the synaptosomal fraction prepared on a discontinuous Ficoll gradient (Sy and 4-Sy fractions; enrichment score Sy/H = 2.02 ± 0.14, n = 4). The integrity of the synaptosomes was confirmed by multiple markers, including the presynaptic membrane marker SNAP25 and the SV marker synaptophysin (Syp) in presynaptic terminals, the postsynaptic density marker PSD95 and the postsynaptic membrane protein GluN1, the synaptic membrane protein SNAP23, the plasma membrane marker Na + -K + -ATPase, and the mitochondrial marker VDAC. These were all enriched in our synaptosomal preparations. The cytosolic marker GAPDH was also present in synaptosomes, whereas the oligodendrocyte marker MBP was nearly absent, suggesting that myelin contamination was largely avoided. We also used EM to confirm the quality of our synaptosome preparations. As reported previously, synaptosomes were composed of membrane-bounded structures (Sy) filled with synaptic vesicles (SV), sometimes with a segment of postsynaptic membrane along with the postsynaptic density (PSD) and mitochondria (Mt). The sizes of synaptosomes from WT and Slc6a8 KO mice were similar, with areas of 0.245 ± 0.01 μm 2 (n = 302 particles) and 0.247 ± 0.01 μm 2 (n = 317 particles), respectively. We then examined whether SLC6A8 participated in Cr uptake into synaptosomes. A mixture of 18 μM [ 14 C]-Cr (with a total radioactivity of 0.4 μCi) and 5 μM Cr was used, and uptake at 0°C measured at 10 min served as the baseline. Cr uptake into synaptosomes from WT mice was stimulated approximately sevenfold at 37°C (Uptake) compared to 0°C (Ctrl).
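The temperature-dependent stimulation of uptake is expressed as a fold change over the 0°C baseline; a minimal sketch with hypothetical count values (not measured data), illustrating the ~7-fold stimulation reported for WT synaptosomes:

```python
def fold_stimulation(uptake_37c: float, uptake_0c: float) -> float:
    # Uptake at 37 degrees C expressed relative to the 0 degrees C baseline control.
    return uptake_37c / uptake_0c

# Hypothetical radioactivity counts (e.g., cpm) chosen to give a 7-fold stimulation.
print(fold_stimulation(uptake_37c=1400.0, uptake_0c=200.0))  # -> 7.0
```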
Cr uptake into synaptosomes from Slc6a8 knockout mice was less than three times compared to its control, and was decreased to approximately 1/3 of that of WT mice . Thus, SLC6A8 is necessary for uptake of Cr into the synaptosomes. Cr uptake into SVs Classical neurotransmitters were taken up in SVs in an ATP-dependent manner . We examined whether Cr could be transported into SVs. We used 10 μg anti-Syp antibody to purify SVs from mouse brains. Purified SVs were preincubated for 30 min to allow sufficient leakage of endogenous Cr, before being mixed with 1 mM [ 13 C]-Cr in the presence or absence of 4 mM ATP and placed at 25℃ for 10 min to allow adequate uptake. The SV content of [ 13 C]-Cr was then examined by CE-MS and high-performance liquid chromatography-mass spectrometry (HPLC-MS). Significantly more [ 13 C]-Cr were taken up by SVs in the presence of ATP, with about 10.3 pmol [ 13 C]-Cr transported into SVs (1.03 pmol/μg α-Syp or transportation rate of 0.103 pmol/min, n = 11, ). In summary, Cr could be transported into SVs in an ATP-dependent manner. At this point, we do not know what is the transporter(s) on the SVs for Cr uptake. SLC6A8 is only found in plasma membrane, not on SVs, and is not a candidate for Cr uptake into SVs. To search for new neurotransmitters, we tried several approaches. For example, we used Ca 2+ imaging to detect neuroactive substances in the cerebrospinal fluid (CSF), but it was difficult to rule out existing neurotransmitters and select responses from potentially new neurotransmitters. We also transfected cDNAs for all human SLCs into dissociated cultures of primary neurons from the mouse brain and found that more than 50 out of all SLCs could be localized in SVs. 
However, when we used CRISPR-Cas9 to tag some of the candidate SLCs in mice, some of them were found to be expressed outside the CNS, indicating that, while ectopic expression of these candidate SLCs could be localized on SVs, the endogenous counterparts were not localized on SVs. Here, we report our approach using the purification of SVs as the first step . Synaptophysin (Syp) is a specific marker for SVs and an anti-Syp antibody was used to immunoisolate SVs . Visualization by electronic microscopy (EM) ( , left panel) showed that the purified vesicles were homogeneous, with an average diameter of 40.44 ± 0.26 nm (n = 596 particles) ( , right panel), consistent with previous descriptions of SVs . Immunoblot analysis with 20 markers of subcellular organelles of neurons and 1 marker for glia indicates that our purifications were highly effective, with SV markers detected after purification with the anti-Syp antibody, but not that with the control immunoglobulin G (IgG). SV proteins included Syp , synaptotagmin (Syt1) , synatobrevin2 (Syb2) , SV2A , H + -ATPase , and vesicular neurotransmitter transporters for glutamate (VGLUT1, VGLUT2) and GABA (VGAT) . Immunoisolation by the anti-Syp antibody did not bring down markers for the synaptic membrane (with SNAP23 as a marker) , postsynaptic components (with PSD95 and GluN1 as markers) , the Golgi apparatus (with GM130 and Golgin 97 as markers) , early endosome (with early endosome-associated 1 [EEA1] as a marker) , the lysosome (with LC3B and cathepsinB as markers), the cytoplasma (with glyceraldehyde-3-phophate dehydrogenase [GAPDH] as a marker), mitochondria (with voltage-dependent anion channel [VDAC] as a marker), cytoplasmic membrane (with calcium voltage-gated channel subunit alpha 1 [CACNA1A] as a marker), axonal membrane (with glucose transporter type 4 [GluT4] as a marker), and glia membrane (with myelin basic protein [MBP] as a marker). 
These results indicated that the SVs we obtained were of high integrity and purity. To detect and quantify small molecules as candidate transmitters present in the purified SVs, capillary electrophoresis-mass spectrometry (CE-MS) was optimized and utilized . We found that the levels of classical neurotransmitters such as Glu, GABA, ACh, and 5-HT were significantly higher in SVs pulled down by the anti-Syp antibody than those in lysates pulled down by the control IgG . Consistent with previous reports , significant enrichment of neurotransmitters was observed only from SVs immunoisolated at near 0°C, but not at the room temperature (RT) . By contrast, another small molecule, alanine , was not elevated in SVs compared to the control. The amount of Glu was 171.1 ± 5.4 pmol/μg anti-Syp antibody (n = 10, ), approximately 10 times that of GABA (n = 10,17.81 ± 1.47 pmol/μg anti-Syp antibody, ). The amount of ACh was 1.29 ± 0.10 pmol/μg anti-Syp antibody (n = 10, ), approximately 0.072 that of GABA. The amount of 5-HT was 0.096 ± 0.017 pmol/μg anti-Syp antibody (n = 10, ). Thus, our purification and detection methods were highly reliable and sensitive enough to detect established neurotransmitters. Under the same conditions, we also detected Cr in SVs (n = 14, ). Amount of Cr in the SVs was found to be 3.43 ± 0.40 pmol/μg anti-Syp antibody , which was approximately 2% of Glu, 19% of GABA, 266% of ACh, and 3573% of 5-HT. It is unlikely that these could be attributable to different levels for different neurotransmitters in each SV, but more likely attributable to the relative abundance of SVs containing different neurotransmitters. Also, 85–90% neurons in the mouse brain were glutamatergic while 10–15% were GABAergic , which can explain our detection of Glu as approximately 10 times that of GABA . 
Similarly, cholinergic neurons (5.67 × 10 5 ) represented 0.81% of total number of neurons (approximately 70 million) in the mouse brain , serotonergic neurons (approximately 26,000) for 0.037% of total neurons . Assuming that the content of each neurotransmitter in a single SV is similar, extrapolation from the above data would suggest that approximately 1.3–2.15% of neurons in the mouse brain are creatinergic. To distinguish whether small molecules co-purified with SVs were in the SVs , or that they were just associated with the outside of SVs , we tested the dependence of the presence of these molecules in the SVs on temperature and on the electrochemical gradient of H + . Cr was significantly reduced in SVs purified at RT compared to that immunoisolated at near 0°C , supporting the presence of Cr inside, instead of outside, SVs. Classical neurotransmitters are stored in SVs with an acidic environment inside (pH of 5.6–6.4) . To further verify the storage of Cr in SVs and examine the role of H + electrochemical gradient, we applied pharmacological inhibitors during purification . The proton ionophore FCCP (carbonyl cyanide-4-(tri-fluoromethoxy) phenylhydrazone) was used to dissipate H + electrochemical gradient . FCCP significantly reduced the amount of Cr as well as classical neurotransmitters in SVs . The extent of FCCP-induced reduction was correlated with the value of pKa or PI (isoelectric point) for different molecule: 5-HT (with pKa predicted to 10 and 9.31, ) > Cr (PI of ~7.94, ) > GABA (PI of 7.33, ) > Glu (PI of 3.22, ). Nigericin, a K + /H + exchanger which dissipates ΔpH , also reduced the amount of Cr and classical neurotransmitters in SVs . Furthermore, in the presence of FCCP or nigericin, SV Cr was reduced to a level comparable to that pulled down by control IgG , demonstrating the storage of Cr in SVs was dependent on H + gradient. As a control, the non-neurotransmitter molecule alanine in SVs was not changed by the inhibitors . 
Slc6a8 SLC6A8 , located on the X chromosome, encodes a transporter for Cr and its loss-of-function (LOF) mutations caused behavioral deficits in humans and mice . To investigate whether SLC6A8 affects Cr in SVs, we generated Slc6a8 knockout (KO) mice. Exon 1 of the Slc6a8 gene was partially replaced with CreERT2-WPRE-polyA by CRISPR/Cas9 . Examination by reverse polymerase chain reaction (RT-PCR) and quantitative real-time reverse PCR (qPCR, ) showed that Slc6a8 mRNA was not detected in either male or female mutants, and significantly reduced in female heterozygous ( Slc6a8 +/- ). Consistent with previous reports, the body weights of Slc6a8 KO mice were reduced . Brain weight was not significantly different between Slc6a8 KO mice and WT mice . When we examined the contents of SVs isolated by the anti-Syp antibody vs the control IgG, significant reduction was only observed for Cr, but not classical neurotransmitters . While Cr pulled down by IgG was not significantly different between Slc6a8 -/Y and Slc6a8 +/Y mice, SV Cr purified by the anti-Syp antibody from Slc6a8 -/Y was reduced to approximately 1/3 that of the WT ( Slc6a8 +/Y ) littermates (n = 14, ). Compared to the IgG control, Cr in SVs was enriched in WT mice, but not in Slc6a8 -/Y mice . In both Slc6a8 -/Y and Slc6a8 +/Y mice, classical neurotransmitters in SVs were all enriched as compared to IgG controls . The amounts of Glu , GABA , ACh , and 5-HT in SVs were not different between Slc6a8 -/Y and Slc6a8 +/Y mice. Molecules not enriched in SVs from WT mice, such as alanine, were also unaffected by Slc6a8 KO . It is unlikely that the specific reduction of Cr in SVs from Slc6a8 KO mice was due to technical artifacts. First, the possibility of less SVs obtained from Slc6a8 KO mice was precluded by immunoblot analysis, as assessed by SV markers Syp, Syt, and H + -ATPase . 
Second, data collected by high-resolution MS (Q Exactive HF-X, Thermo Scientific, Waltham, MA) also revealed a selective decrease of SV Cr (m/z = 132.0576) from Slc6a8 KO mice (n = 8, ), as quantified by the peak area. Peak areas for Glu (n = 8, , m/z = 148.0604), GABA (n = 8, , m/z = 104.0712), ACh (n = 8, , m/z = 146.1178), and alanine (n = 8, Ala, , m/z = 90.055) were not significantly different between SVs immunoisolated with the anti-Syp antibody and control IgG from WT and Slc6a8 KO mice. However, the peak area and amplitude of the Cr signal (n = 8, ) were significantly increased in SVs from WT mice (anti-Syp antibody vs IgG), but not in those from Slc6a8 KO mice.
Agat
AGAT is the enzyme catalyzing the first step in Cr synthesis, and its absence also leads to Cr deficiency in the human brain and mental retardation . To investigate the requirement of AGAT for SV Cr, we utilized Agat 'knockout-first' mice (Figure 6A; ). The targeting cassette, containing an FRT (flippase recognition target)-flanked En2 SA (splice acceptor), an IRES::lacZ trapping cassette, and a floxed neo cassette, was inserted downstream of exon 2 to interfere with normal splicing of Agat pre-mRNA. Examination by RT-PCR and quantitative RT-PCR showed reduction of Agat mRNA in Agat+/- mice and absence of Agat mRNA in Agat-/- mice. The body weight , but not the brain weight , of Agat-/- mice was lower than those of both Agat+/+ and Agat+/- mice, similar to Slc6a8 KO mice. Immunoblot analysis showed that SVs purified from the brains were not significantly different among Agat+/+ , Agat+/- , and Agat-/- mice , as supported by quantitative analysis of Syp, Syt, and H+-ATPase (n = 20, with two repeats for 10 samples). We analyzed small molecules present in SVs from Agat+/+ , Agat+/- , and Agat-/- mice. Cr was significantly enriched in SVs from all three genotypes compared to the IgG control . However, the level of Cr from Agat-/- mice was significantly lower than those from Agat+/+ and Agat+/- mice (n = 10, ).
Glu , ACh , and 5-HT were all enriched in SVs (compared to IgG controls) and not significantly different among Agat+/+ , Agat+/- , and Agat-/- mice. GABA in SVs from Agat-/- mice was also decreased, by 30% relative to Agat+/+ mice, an extent smaller than that of Cr (78.4%). Alanine was not different among the three genotypes (n = 6, ). Thus, Cr and GABA, but not other neurotransmitters, in SVs were reduced in Agat KO mice.
We generated Slc6a8 HA knockin mice by CRISPR/Cas9. Three repeats of the hemagglutinin (HA) tag , the T2A sequence , and CreERT2 were inserted in-frame at the C terminus of the SLC6A8 protein . To examine the expression pattern of SLC6A8, we performed immunocytochemistry with an antibody against the HA epitope in Slc6a8 HA and WT mice. Slc6a8 HA mice showed positive signals in the olfactory bulb , the piriform cortex , the somatosensory cortex , the ventral posterior thalamus , the interpeduncular nucleus , and the pontine nuclei . In addition, moderate levels of immunoreactivity were observed in the motor cortex , the medial habenular nucleus , the hippocampus , and the cerebellum . These results were consistent with previous reports . WT mice were negative for anti-HA antibody staining .
Ca2+-dependent release of Cr upon stimulation
Classical neurotransmitters are released from SVs into the synaptic cleft in a Ca2+-dependent manner after stimulation. For example, high extracellular potassium (K+) stimulates Ca2+-dependent release of Glu, GABA, and other neurotransmitters in brain slices . Thus, 300-μm-thick coronal slices of the mouse brain within 1–2 mm posterior to the bregma were used, because the cortex, the thalamus, the habenular nucleus, and the hippocampus were positive for SLC6A8 (cf., ). We monitored the effect of K+ stimulation by recording neurons in the slices.
Immediately after K+ stimulation, pyramidal neurons in the CA1 region of the hippocampus were depolarized, firing a train of action potentials and reaching a large depolarization plateau in less than 1 min . K+-induced depolarization persisted for several minutes before returning to the baseline upon washout within 10 min. Thus, superfusates were collected in 1 min fractions at the time points of 1.5 min before (control) and after K+ stimulation, and 10 min after the wash , and the metabolites in the superfusates were analyzed by CE-MS. In the presence of Ca2+, depolarization with elevated extracellular K+ led to robust release of Glu and GABA in slices from WT ( Slc6a8+/Y ) mice (n = 7 per group, ). After a 10 min wash, levels of Glu and GABA returned to the baseline . In the presence of Ca2+, depolarization with elevated K+ also led to robust release of Cr, and extracellular Cr returned to the baseline after the 10 min wash . For quantification, the stimulated release of each metabolite was calculated by subtracting the basal level from the total release in response to K+ stimulation. In the presence of Ca2+, K+ stimulation induced the efflux of Glu, GABA, and Cr at 0.46, 0.33, and 0.086 nmol/min, respectively (n = 7 per group) . From the detection limits of ACh and 5-HT in our system, we inferred that the efflux rate for ACh was lower than 0.001 nmol/min and that for 5-HT lower than 0.003 nmol/min. The efflux rate for Cr in brain slices was thus lower than those of Glu and GABA, but higher than those of ACh and 5-HT. The Ca2+ dependence of transmitter release was examined by comparing responses to ACSF without Ca2+ or elevated K+ (supplemented with 1 mM EGTA), elevated extracellular K+ in the absence of Ca2+ (supplemented with 1 mM EGTA), and elevated K+ in the presence of 2.5 mM Ca2+ ( , n = 5 per group).
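The subtraction used for quantification above can be made explicit. In this minimal sketch the per-fraction amounts are hypothetical, chosen only so that the resulting differences match the quoted 0.46, 0.33, and 0.086 nmol/min rates:

```python
def stimulated_efflux(total_nmol: float, basal_nmol: float,
                      fraction_minutes: float = 1.0) -> float:
    """Stimulated release rate: total release in the K+-stimulation
    fraction minus the basal (pre-stimulation) fraction, per minute."""
    return (total_nmol - basal_nmol) / fraction_minutes

# hypothetical 1-min fraction contents (nmol); only the differences
# correspond to the rates reported in the text
rates = {
    "Glu":  stimulated_efflux(0.60, 0.14),    # 0.46 nmol/min
    "GABA": stimulated_efflux(0.41, 0.08),    # 0.33 nmol/min
    "Cr":   stimulated_efflux(0.12, 0.034),   # 0.086 nmol/min
}
```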
In the absence of Ca2+, elevated K+ stimulated the release of small but significant amounts of Glu and GABA, with efflux rates of 0.056 and 0.066 nmol/min, respectively . In the presence of 2.5 mM Ca2+, elevated K+ further augmented the release of Glu and GABA by 5–6 times, confirming previously reported Ca2+-dependent release of neurotransmitters in response to depolarization . Cr was also released in both a Ca2+-dependent and a Ca2+-independent manner . More Cr was released in response to K+ stimulation in the presence of 2.5 mM Ca2+ than in its absence . These results demonstrate Ca2+-dependent release of Cr upon stimulation.
Slc6a8 and Agat mutant mice
We examined whether Slc6a8 KO affected K+-induced release of Cr. While Glu and GABA were released in slices from Slc6a8 KO ( Slc6a8-/Y ) mice at levels not significantly different from those of WT mice , release of Cr in response to K+ stimulation was significantly reduced in Slc6a8-/Y mice compared to Slc6a8+/Y mice . The basal level of Cr in Slc6a8 KO mice was also lower than that of WT mice. In addition, K+ stimulation-induced release of Cr persisted to some extent even after 10 min of washout , possibly due to the inability of presynaptic terminals in Slc6a8 KO mice to reuptake Cr from the synaptic cleft (Figure 8). Experiments with slices from brains of Slc6a8 KO ( Slc6a8-/Y ) mice showed that Ca2+-dependent release of either Glu or GABA was not affected by the genotype of Slc6a8 . By contrast, Ca2+-dependent release of Cr was abolished in Slc6a8-/Y slices. Interestingly, Ca2+-independent release of Cr was reduced by a third in Slc6a8-/Y slices, although this did not reach statistical significance. In the absence of Ca2+, the basal level of Cr was not changed in Slc6a8 KO mice.
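The decomposition into Ca2+-dependent and Ca2+-independent components described above is a simple subtraction. In the sketch below, the Ca2+-free rates are the ones quoted in the text, while the rates in 2.5 mM Ca2+ are assumed illustrative values consistent with the stated 5–6 times augmentation:

```python
def ca_dependent_component(rate_with_ca: float, rate_without_ca: float) -> float:
    """Ca2+-dependent efflux = total efflux with Ca2+ minus the
    Ca2+-independent efflux measured in Ca2+-free (EGTA) ACSF."""
    return rate_with_ca - rate_without_ca

# nmol/min; Ca2+-free values from the text, with-Ca2+ values assumed
glu_free, gaba_free = 0.056, 0.066
glu_ca, gaba_ca = 0.31, 0.36

fold_glu = glu_ca / glu_free      # ~5.5-fold augmentation
fold_gaba = gaba_ca / gaba_free   # ~5.5-fold augmentation
```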
Taken together, these results indicate that there is Ca2+-dependent release of Cr upon stimulation and that SLC6A8 is required specifically for the Ca2+-dependent release of Cr, but not for the Ca2+-dependent release of other neurotransmitters such as Glu and GABA, or for the Ca2+-independent release of Cr. Knockout of Agat selectively reduced the K+-evoked release of Cr, but not that of Glu or GABA (n = 5 per group, ). Although K+ stimulation still elicited Cr release from brain slices of Agat+/- mice, the efflux rate in Agat-/- mice was reduced to less than 10% of that in Agat+/+ mice and 20% of that in Agat+/- mice.
Our own data and previous reports have shown SLC6A8 in the neocortex, with dense SLC6A8-HA immunoreactive fibers in layer 4 . Layer 5 neurons in the somatosensory cortex have previously been reported to express SLC6A8 . To investigate the electrophysiological effects of Cr, we performed whole-cell patch-clamp recordings from pyramidal neurons in layer 4/5 of the somatosensory cortex . Medium-sized pyramidal neurons with a membrane capacitance (Cm) of 114.96 ± 3.92 pF (n = 51, ) were recorded. These neurons exhibited regular firing patterns in response to depolarizing current injection , with moderate maximal evoked spiking frequencies of 10–30 spikes per 500 ms , increasing inter-spike intervals during depolarizing steps , high action potential amplitudes (81.64 ± 1.06 mV, ), and large spike half-widths (1.12 ± 0.031 ms, ). Cr was bath-applied only after the evoked firing pattern reached a steady state. Of the 51 neurons, 16 were inhibited by 100 μM Cr . Fewer spikes were evoked in Cr-responsive neurons in response to depolarizing current injections (25 pA steps, 500 ms) during Cr application . The inhibitory effect of Cr was reversible , typically observed within 2–3 min following Cr application (with maximal effect from 2 to 8 min), and disappeared after 10–25 min of washout. This could be repeated by a second application of Cr.
The rheobase, defined as the minimal electrical current necessary to elicit an action potential, was increased during bath application of Cr . The inhibitory effect was most obvious near the spike threshold. When a neuron was depolarized with a current of 50 pA above rheobase, the number of evoked spikes was decreased dramatically during Cr application . Cr also mildly reduced the input resistance , slightly hyperpolarized the resting membrane potential , and reduced the amplitude of the afterhyperpolarization (AHP) following the first evoked action potential . The spike threshold , amplitude , and half-width were not changed by Cr. The remaining 35 neurons were not responsive to Cr . In these neurons, Cr did not change any of the electrical parameters tested, including evoked firing rates , rheobase , resting membrane potential , spike threshold , amplitude , and half-width . In addition, the electrical properties of responsive and unresponsive neurons were not significantly different. With the limited number of neurons recorded, the ratio of responsive neurons appeared higher in layer 4 or at the border of layers 4/5 than in the deeper part of layer 5 .
Along with enzymatic degradation, reuptake by transporters serves as an important way to remove neurotransmitters released into the synaptic cleft. As synaptosomes contain the apparatus for neurotransmission, they are often used for studying the uptake of neurotransmitters . To investigate whether Cr uptake into synaptosomes required SLC6A8, we first examined whether SLC6A8 was present in synaptosomes. Using Slc6a8 HA knockin mice and an anti-HA antibody, we found that SLC6A8-HA was present and enriched in the crude synaptosomal fraction (P2 fraction in , enrichment score: P2/H = 1.76 ± 0.15, n = 4) and in the synaptosomal fraction prepared using a discontinuous Ficoll gradient (Sy and 4-Sy fractions in , enrichment score: Sy/H = 2.02 ± 0.14, n = 4).
The integrity of the synaptosomes was confirmed by multiple synaptosomal markers , including the presynaptic membrane marker SNAP25 and the SV marker synaptophysin (Syp) in presynaptic terminals, the postsynaptic density marker PSD95 and the postsynaptic membrane protein GluN1 , the synaptic membrane protein SNAP23 , the plasma membrane marker Na+-K+-ATPase , and the mitochondrial marker VDAC . These were all enriched in our synaptosomal preparations. The cytosolic marker GAPDH was also present in synaptosomes, whereas the oligodendrocyte marker MBP was nearly absent, suggesting that myelin contamination was largely avoided . We also used EM to confirm the quality of our synaptosome preparations. As reported previously , synaptosomes were composed of membrane-bounded structures (Sy in ) filled with synaptic vesicles (SV in ), sometimes with a segment of postsynaptic membrane along with the postsynaptic density (PSD in ) and mitochondria (Mt in ). The sizes of synaptosomes from WT mice and Slc6a8 KO mice were similar, with areas of 0.245 ± 0.01 μm² (n = 302 particles) and 0.247 ± 0.01 μm² (n = 317 particles), respectively . We then examined whether SLC6A8 participated in Cr uptake into the synaptosomes. A mixture of 18 μM [14C]-Cr (with a total radioactivity of 0.4 μCi) and 5 μM Cr was used, and uptake at 0°C measured at 10 min served as the baseline . Cr uptake into synaptosomes from WT mice was stimulated approximately sevenfold at 37°C (Uptake, ) compared to 0°C (Ctrl, ). Cr uptake into synaptosomes from Slc6a8 KO mice was stimulated less than threefold relative to its own control, and was decreased to approximately 1/3 of that of WT mice . Thus, SLC6A8 is necessary for the uptake of Cr into synaptosomes. Classical neurotransmitters are taken up into SVs in an ATP-dependent manner . We examined whether Cr could also be transported into SVs. We used 10 μg of anti-Syp antibody to purify SVs from mouse brains.
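The enrichment scores and fold-stimulation figures above are simple ratios. A minimal sketch with hypothetical uptake signals, chosen only to reproduce the quoted relationships (~7-fold stimulation in WT, less than 3-fold in the KO, KO uptake about 1/3 of WT):

```python
def fold(value: float, baseline: float) -> float:
    """Fold change relative to a baseline measurement."""
    return value / baseline

# hypothetical [14C]-Cr uptake signals (arbitrary units);
# the 0 deg C samples serve as the non-transport baseline
wt_0c, wt_37c = 1.0, 7.0
ko_0c, ko_37c = 1.0, 2.4

wt_stimulation = fold(wt_37c, wt_0c)   # ~7-fold in WT
ko_stimulation = fold(ko_37c, ko_0c)   # <3-fold in Slc6a8 KO
ko_vs_wt = ko_37c / wt_37c             # ~1/3 of WT uptake
```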
Purified SVs were preincubated for 30 min to allow sufficient leakage of endogenous Cr, before being mixed with 1 mM [13C]-Cr in the presence or absence of 4 mM ATP and placed at 25°C for 10 min to allow adequate uptake. The SV content of [13C]-Cr was then examined by CE-MS and high-performance liquid chromatography-mass spectrometry (HPLC-MS). Significantly more [13C]-Cr was taken up by SVs in the presence of ATP, with about 10.3 pmol of [13C]-Cr transported into SVs (1.03 pmol/μg α-Syp, or a transport rate of 0.103 pmol/min, n = 11, ). In summary, Cr can be transported into SVs in an ATP-dependent manner. At this point, we do not know the identity of the transporter(s) on the SVs responsible for Cr uptake. SLC6A8 is found only in the plasma membrane, not on SVs, and is therefore not a candidate for Cr uptake into SVs.
While no neurotransmitter has ever been fully established in a single paper, the supportive evidence presented here for Cr as a possible new neurotransmitter is comparable in extent to that of any previous single paper. At various times and by different researchers, taurine , proline , D-aspartic acid , hydrogen sulfide , agmatine , DOPA , estradiol , β-alanine , and protons have been suspected as neurotransmitters, but they do not meet all the criteria. Some of the suspected molecules can be released upon stimulation or removed by transporters, but they have often not been reproducibly found in SVs . Our discovery of Cr in SVs significantly raised the priority of testing the candidacy of Cr, and our further investigations have led to more evidence suggesting Cr as a neurotransmitter: (1) Cr is stored in SVs; (2) Ca2+-dependent release of Cr upon stimulation has been observed; (3) both Cr storage in SVs and Cr release were reduced when either the Slc6a8 gene or the Agat gene was deficient; (4) Cr inhibits the activities of pyramidal neurons in the neocortex; (5) Cr uptake into synaptosomes requires SLC6A8; and (6) Cr uptake into SVs is ATP-dependent.
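The SV uptake numbers above follow from normalizing the measured [13C]-Cr by the amount of anti-Syp antibody and by the incubation time. A quick check; interpreting the quoted 0.103 pmol/min rate as being further normalized per μg of antibody is our assumption:

```python
total_pmol = 10.3      # [13C]-Cr taken up into SVs (ATP condition)
antibody_ug = 10.0     # anti-Syp antibody used for the SV pull-down
minutes = 10.0         # uptake incubation at 25 deg C

per_ug = total_pmol / antibody_ug       # 1.03 pmol/ug anti-Syp
rate = per_ug / minutes                 # 0.103 pmol/(ug*min), assumed unit
```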
Of the above results, 1, 3, 4, and 6 are reported for the first time in this article. Furthermore, we have demonstrated that the level of Cr in SVs was lower than those of Glu and GABA, but higher than those of ACh and 5-HT, placing Cr in the middle range of known central transmitters . The storage of Cr in SVs is dependent on a preserved H+ gradient, and Cr can be transported into SVs . There was a single previous report of Ca2+-dependent release of [3H]Cr and endogenous Cr in response to electrical stimulation . We now provide evidence that Cr was released in response to extracellular K+ stimulation (within 1–2 min) ( and ). Furthermore, Cr release was reduced when either the Slc6a8 or the Agat gene was removed ( and ). Although the Ca2+-dependent component of K+-evoked Cr release was smaller compared to those of Glu and GABA, it nevertheless existed and was totally abolished by Slc6a8 knockout . The reported electrically evoked Cr release showed more Ca2+ dependence . Taken together, our data and the previous report support a role of Cr as a neurotransmitter. Our observation of extremely low efflux rates of 5-HT and ACh may have arisen from the very limited numbers of cholinergic and serotonergic neurons in the sliced sections and the rapid enzymatic degradation of these neurotransmitters. Cr uptake from the extracellular space into cells was reported twice previously, once with brain slices showing sodium-dependent uptake of [3H]Cr and once with synaptosomes . Our new results have not only replicated the synaptosome Cr uptake experiment but also shown the requirement of SLC6A8, a membrane transporter expressed in synaptosomes , for Cr uptake into synaptosomes. Transport of Cr into synaptosomes by SLC6A8 may function both in the clearance of Cr from the synaptic cleft and in the recycling of Cr into SVs residing in neurons .
In summary, in addition to confirming and extending previous results which had stood alone for more than a decade without replication or follow-up, we have obtained entirely new results suggesting the candidacy of Cr as a neurotransmitter. We discuss below the criteria for a neurotransmitter, Cr as a neurotransmitter, and the implications of Cr as a neurotransmitter.
Criteria of a neurotransmitter
The criteria for establishing a non-peptide small molecule as a neurotransmitter have varied from time to time and from author to author. Some textbooks simply state that a neurotransmitter is stored presynaptically, released upon stimulation, and active on postsynaptic neurons. The details of these three criteria can vary. For example, one textbook stipulates that "the substance must be present within the presynaptic neuron; the substance must be released in response to presynaptic depolarization, and the release must be Ca2+ dependent; specific receptors for the substance be present on the postsynaptic cell" . Another states that "the molecule must be synthesized and stored in the presynaptic neuron; the molecule must be released by the presynaptic axon terminal upon stimulation; the molecule, when experimentally applied, must produce a response in the postsynaptic cell that mimics the response produced by the release of neurotransmitter from the presynaptic neuron" . The neuroscience textbook most widely used internationally for the last four decades lists four criteria for a neurotransmitter : it is synthesized in the presynaptic neuron; it is present within vesicles and is released in amounts sufficient to exert a defined action on the postsynaptic neuron or effector organ; when administered exogenously in reasonable concentrations, it mimics the action of the endogenous transmitter; and a specific mechanism usually exists for removing the substance from the synaptic cleft.
These are similar, but not identical, to the classic textbook on neurotransmitters: a neurotransmitter "should be synthesized and released presynaptically; it must mimic the action of the endogenous compound that is released on nerve stimulation; and where possible, a pharmacological identity is required where drugs that either potentiate or block postsynaptic responses to the endogenously released agent also act identically to the suspected neurotransmitter that is administered" . The pharmacological criterion is also listed in another textbook . Some authors note difficulties in establishing a CNS neurotransmitter. For example, a specialized neurotransmitter book states that "the candidate neurotransmitter should be present in the presynaptic terminal, be released when the presynaptic terminal is active, and when applied experimentally, induce faithful responses in the postsynaptic neuron. In practice, since central nervous system neurons continuously integrate diverse excitations and inhibitions, the last criterion is relaxed to demonstrating merely changes in such activity" . Solomon Snyder, a leading scientist of classic neurotransmitters, neuropeptides, and their receptors, wrote that "designating a molecule as a transmitter depends on the criteria employed, the most common of which are that the substance is synthesized in neurons, released by their terminals, mimics the effects of physiologic neurotransmission and possess a mechanism for inactivation. However, with each new candidate the rules have been modified and broadened" .
Evidence supporting Cr as a neurotransmitter
Sixteen small molecules have been listed as neurotransmitters in the classic textbook . Among them, adenosine, arachidonic acid, nitric oxide, and carbon monoxide do not meet all four criteria at present. Cr appears to be better than these in meeting the criteria for a central neurotransmitter.
The results obtained by us in this article have satisfied the criteria of for Cr to be a CNS neurotransmitter. The four criteria of Snyder and colleagues have mostly been met, but physiological neurotransmission will require more research, because a specific synapse(s) would have to be defined and studied for putative creatinergic neurotransmission. This can take much longer in the CNS than in the PNS. Some commonly accepted neurotransmitters have never satisfied this criterion in a strict sense. The criterion of a mechanism of Cr removal is met not only by Cr uptake in brain slices and in synaptosomes , but also by our demonstration that SLC6A8 is required for synaptosome uptake of Cr. The four criteria of and are mostly satisfied, with some details requiring further research. The synthesis requirement is usually not strict, because there are transmitters synthesized in some cells and transported into others, where they function as transmitters. Our discovery of Cr in SVs can substitute for the synthesis requirement, because the presence of Cr in neuronal SVs provides sufficient evidence that Cr is located in the right place to function as a neurotransmitter. The level of Cr in SVs is higher than those of ACh and 5-HT ( and ). The amount of released Cr is in the same order of magnitude as those of Glu and GABA ( and ). The criterion of a specific mechanism of removal was met by Cr uptake experiments in slices and in synaptosomes , and further strengthened by our finding of SLC6A8 involvement in synaptosome uptake of Cr . Here, we report that Cr, at a concentration comparable to those of classical neurotransmitters, inhibits pyramidal neurons in specific regions of the mouse brain, with approximately one-third of pyramidal neurons responding to 100 μM Cr . In previous reports, 100 μM to 2 mM GABA , 50 μM to 2 mM Glu , 1–100 μM DA , and 0.1–100 μM 5-HT were bath-applied to investigate the physiological functions of these neurotransmitters.
Our results revealed that, when bath-applied, Cr could inhibit cortical neurons at 100 μM within several minutes, with a time course similar to those of 5-HT and DA , but significantly slower than those of Glu , GABA , and 5-HT . In a recent report, knockout of the Slc6a8 gene increased the excitation of cortical neurons . Electrophysiological characterization of pyramidal neurons in the prefrontal cortex (PFC) found increased evoked firing frequency. Because we have shown that Cr inhibits a fraction of pyramidal neurons in the neocortex , that article provides in vivo evidence consistent with the possibility of Cr as an inhibitory neurotransmitter.
Differences between Cr and classic neurotransmitters
At this point, we do not have a molecularly defined receptor for Cr, only inferring its presence from the electrophysiological responses to Cr. We speculate that Cr may act on G-protein-coupled receptors (GPCRs), rather than on fast-acting ligand-gated ion channels, such as the AMPA or NMDA receptors for Glu and the GABA-A receptor for GABA. There have been previous reports of Cr effects on neurons, including Cr as a partial agonist for GABA-A receptors . These effects required very high concentrations of Cr (in the 10 mM range). There was also a report of the opposite effect: that Cr (at concentrations above 500 μM) increased neuronal excitability through NMDA receptors after incubation for 60 min, with a time course significantly slower than those of classic neurotransmitters . The Ca2+-independent component of Cr release induced by extracellular K+ was more prominent than those of Glu or GABA. One possibility is that the Ca2+-independent Cr release came from glia, because high GAMT levels have been reported in astrocytes and oligodendrocytes . As reported, other neuromodulators such as taurine can be released from astrocytes or slices in a Ca2+-independent manner.
In addition, in the absence of potassium stimulation, Ca2+ depletion increased the release of taurine in cultured astrocytes or in the striatum in vivo . Similarly, in Slc6a8 KO slices, Ca2+ depletion also increased the Cr baseline compared to that in normal ACSF . With a much longer history of research, ACh and 5-HT now have more evidence than Cr in other aspects as central transmitters, especially because there are many agonists and antagonists for ACh and 5-HT to prove an additional criterion that is required in some , but not the majority of, textbooks for a neurotransmitter. The pharmacology criterion will take some time and effort, because so far no effort has been made to find agonists or antagonists for Cr.
Implications of SLC6A8 and Cr
It is notable that SLC6A8 belongs to the NTT family, with multiple members already shown to transport neurotransmitters . The uptake experiments by others and by us indicate that SLC6A8 transports Cr into neurons within the brain. AGAT is also expressed in the brain, but in cells not expressing SLC6A8 . Cr and its precursor were thus thought to be transported between different cells in the nervous system. When SLC6A8 was completely missing, such as in homozygous SLC6A8-deficient patients, Cr treatment was not effective. But if SLC6A8 was partially active, Cr was effective . Intractable epilepsy in a female patient with a heterozygous SLC6A8 mutation was completely treated by Cr . Our data on the inhibitory effect of Cr on cortical neurons might provide a new mechanism for its anti-epileptic activity . The absence of SLC6A8 expression in astrocytes, whose endfeet lining microcapillary endothelial cells (MCEC) form the blood–brain barrier (BBB), indicates that Cr in the brain does not rely on import from the periphery and is instead mainly synthesized in the brain . SLC6A8 thus functions within the brain to transport Cr and its precursors, rather than serving as a major contributor to Cr transport across the BBB.
It is thought to mediate Cr uptake into the presynaptic terminal, based on studies of synaptosomes . Cr is known to have effects other than as an energy source, and Cr supplementation has been thought to be beneficial for children, pregnant and lactating women, and older people . Cr has been reported to improve human mental performance . Cr has been used as a potential treatment in animal models of neurodegenerative diseases . Our work will stimulate further research to distinguish which of the previously suspected effects of Cr are attributable not to its role as an energy store, but to its role as a neurotransmitter.
Search for new neurotransmitters
Our work may stimulate the search for more neurotransmitters. Our discovery indicates that the hunt for neurotransmitters stopped decades ago because of technical difficulties, not because of the absence of more neurotransmitters. The fact that most of the known small-molecule neurotransmitters were found because of their peripheral effects also argues that what has been missing is a concerted effort to uncover central neurotransmitters with no peripheral effects. New neurotransmitters may be discovered from candidates which have long been suspected, and from previously unsuspected or even previously unknown molecules. Innovative approaches should be taken to uncover molecules with no previous suspicions or hints. Highly purified SVs, SVs from different regions of the brain, and SVs with specific SLCs offer some of the starting points for future research.
For example, one textbook stipulates that “the substance must be present within the presynaptic neuron; the substance must be released in response to presynaptic depolarization, and the release must be Ca 2+ dependent; specific receptors for the substance be present on the postsynaptic cell” . Another states that “the molecule must be synthesized and stored in the presynaptic neuron; the molecule must be released by the presynaptic axon terminal upon stimulation; the molecule, when experimentally applied, must produce a response in the postsynaptic cell that mimics the response produced by the release of neurotransmitter from the presynaptic neuron” . The neuroscience textbook most widely used internationally for the last four decades lists four criteria for a neurotransmitter : it is synthesized in the presynaptic neuron; it is present within vesicles and is released in amounts sufficient to exert a defined action on the postsynaptic neuron or effector organ; when administered exogenously in reasonable concentrations, it mimics the action of the endogenous transmitter; and a specific mechanism usually exists for removing the substance from the synaptic cleft. These are similar, but not identical, to the classic textbook on neurotransmitters: a neurotransmitter “should be synthesized and released presynaptically; it must mimic the action of the endogenous compound that is release on nerve stimulation; and where possible, a pharmacological identity is required where drugs that either potentiate or block postsynaptic responses to the endogenously released agent also act identically to the suspected neurotransmitter that is administered” . The pharmacological criterion is listed in another textbook . Some authors note difficulties in establishing a CNS neurotransmitter. 
For example, a specialized neurotransmitter book states that “the candidate neurotransmitter should be present in the presynaptic terminal, be released when the presynaptic terminal is active, and when applied experimentally, induce faithful responses in the postsynaptic neuron. In practice, since central nervous system neurons continuously integrate diverse excitations and inhibitions, the last criterion is relaxed to demonstrating merely changes in such activity” . Solomon Snyder, a leading scientist of classic neurotransmitters, neuropeptides and their receptors, wrote that “designating a molecule as a transmitter depends on the criteria employed, the most common of which are that the substance is synthesized in neurons, released by their terminals, mimics the effects of physiologic neurotransmission and possess a mechanism for inactivation. However, with each new candidate the rules have been modified and broadened” . Sixteen small molecules have been listed as neurotransmitters in the classic textbook . Among them, adenosine, arachidonic acid, nitric oxide, and carbon monoxide do not meet all four criteria at present. Cr appears to be better than these in meeting the criteria for a central neurotransmitter. The results obtained by us in this article have satisfied the criteria of for Cr to be a CNS neurotransmitter. The four criteria of Snyder and colleagues have been mostly met but the physiological neurotransmission would require more research because a specific synapse(s) would have to be defined and studied for putative creatinergic neurotransmission. This can take much longer in the CNS than the PNS. Some commonly accepted neurotransmitters have never satisfied this criterion in a strict sense. The mechanism of Cr removal criterion is met not only by the Cr uptake in brain slices and in synaptosomes , but also by our demonstration that SLC6A8 is required for synaptosome uptake of Cr. 
The four criteria listed in the textbooks cited above are mostly satisfied, with some details requiring further research. The synthesis requirement is usually not strict, because there are transmitters that are synthesized in some cells and transported into others where they function as transmitters. Our discovery of Cr in SVs can replace the synthesis requirement, because the presence of Cr in neuronal SVs provides sufficient evidence that it is in the right location to function as a neurotransmitter. The level of Cr in SVs is higher than those of ACh and 5-HT. The amount of released Cr is of the same order of magnitude as those of Glu and GABA. The criterion of a specific mechanism of removal was met by Cr uptake experiments in slices and in synaptosomes, and further strengthened by our finding of SLC6A8 involvement in synaptosome uptake of Cr. Here, we report that Cr, at a concentration comparable to classical neurotransmitters, inhibits pyramidal neurons in specific regions of the mouse brain, with approximately one-third of pyramidal neurons responding to 100 μM Cr. In previous reports, 100 μM to 2 mM of GABA, 50 μM to 2 mM of Glu, 1–100 μM of DA, and 0.1–100 μM of 5-HT were bath-applied to investigate the physiological functions of neurotransmitters. Our results revealed that, when bath-applied, Cr could inhibit cortical neurons at 100 μM within several minutes, with a time course similar to those of 5-HT and DA, but significantly slower than those of Glu, GABA, and 5-HT. In a recent report, knockout of the Slc6a8 gene increased excitation of cortical neurons. Electrophysiological characterization of pyramidal neurons in the prefrontal cortex (PFC) found increased evoked firing frequency. Because we have shown that Cr inhibits a fraction of pyramidal neurons in the neocortex, that report provides in vivo evidence consistent with the possibility of Cr as an inhibitory neurotransmitter.
At this point, we do not have a molecularly defined receptor for Cr, only inferring its presence from the electrophysiological responses to Cr. We speculate that Cr may act on G-protein-coupled receptors (GPCRs), rather than on fast-acting ligand-gated ion channels such as the AMPA or NMDA receptors for Glu and the GABA A receptor for GABA. There have been previous reports of Cr effects on neurons, including Cr as a partial agonist for GABA A receptors. These effects require very high concentrations of Cr (in the 10 mM range). There was also a report of the opposite effect: that Cr (at concentrations above 500 μM) increased neuronal excitability through NMDA receptors after incubation for 60 min, with a time course significantly slower than those of classic neurotransmitters. The Ca 2+ -independent component of Cr release induced by extracellular K + was more prominent than that of Glu or GABA. One possibility is that the Ca 2+ -independent Cr release came from glia, because high GAMT levels have been reported in astrocytes and oligodendrocytes. As reported, other neuromodulators such as taurine can be released from astrocytes or slices in a Ca 2+ -independent manner. In addition, in the absence of potassium stimulation, Ca 2+ depletion increased release of taurine in cultured astrocytes or in the striatum in vivo. Similarly, in Slc6a8 KO slices, Ca 2+ depletion also increased the Cr baseline compared to that in normal ACSF. With a much longer history of research, ACh and 5-HT now have more evidence than Cr in other aspects of being a central transmitter, especially because there are many agonists and antagonists for ACh and 5-HT to prove an additional criterion that is required in some, but not the majority of, textbooks for a neurotransmitter. The pharmacology criterion will take time and effort to satisfy, because so far no effort has been made to find agonists or antagonists for Cr.
It is notable that SLC6A8 belongs to the NTT family, multiple members of which have already been shown to transport neurotransmitters. The uptake experiments by others and by us indicate that SLC6A8 transports Cr into neurons within the brain. AGAT is also expressed in the brain, but in cells not expressing SLC6A8. Cr and its precursor are thought to be transported between different cells in the nervous system. When SLC6A8 was completely missing, such as in homozygous SLC6A8-deficient patients, Cr treatment was not effective, but when SLC6A8 was partially active, Cr was effective. Intractable epilepsy in a female with a heterozygous SLC6A8 mutation was completely treated by Cr. Our data on the inhibitory effect of Cr on cortical neurons might provide a new mechanism for its anti-epileptic activity. The absence of SLC6A8 expression in the astrocytes whose endfeet line the microcapillary endothelial cells (MCECs) forming the blood–brain barrier (BBB) indicates that Cr in the brain does not rely on import from the periphery and is instead mainly synthesized in the brain. SLC6A8 functions within the brain to transport Cr and its precursors, not as a major contributor to Cr transport across the BBB. It is thought to mediate Cr uptake into the presynaptic terminal, based on studies of synaptosomes. Cr is known to have effects other than as an energy source, and Cr supplementation has been thought to be beneficial for children, pregnant and lactating women, and older people. Cr has been reported to improve human mental performance. Cr has been used as a potential treatment in animal models of neurodegenerative diseases. Our work will stimulate further research to distinguish which of the previously suspected effects of Cr are attributable not to its role in energy storage but to its role as a neurotransmitter. Our work may also stimulate the search for more neurotransmitters.
Our discovery indicates that the hunt for neurotransmitters stopped decades ago because of technical difficulties, not because of the absence of more neurotransmitters. The fact that most of the known small-molecule neurotransmitters were found because of their peripheral effects also argues that what is missing is a concerted effort to uncover central neurotransmitters with no peripheral effects. New neurotransmitters may be discovered from candidates that have long been suspected, and from previously unsuspected or even previously unknown molecules. Innovative approaches should be taken to uncover molecules with no previous suspicions or hints. Highly purified SVs, SVs from different regions of the brain, and SVs with specific SLCs offer some of the starting points for future research.

Generation of knockout and knockin mice

Slc6a8 knockout and knockin mice were generated using CRISPR-Cas9-mediated genome engineering techniques by Beijing Biocytogen (Beijing, China). Agat ‘knockout-first’ mice were purchased from CAM-SU GRC (Suzhou, China). All mutations were validated by Southern blot analysis, tail junction PCR, and DNA sequencing. Transgenic mice will be provided upon request.

RT-PCR and qPCR

Total RNA of whole brains from mice of different genotypes was extracted using Buffer RZ (Tiangen, no. RK14, Beijing, China) and reverse transcribed into complementary DNA (cDNA) using the RevertAid First-Strand cDNA Synthesis Kit (Thermo Scientific, K1622). qPCR was performed using the Taq Pro Universal SYBR qPCR Master Mix (Vazyme, Q712-02) on a Bio-Rad CFX-96 Touch Real-Time PCR System (Bio-Rad, USA). Glyceraldehyde-3-phosphate dehydrogenase ( Gapdh ) was used as an internal control. ΔCt (difference in cycle threshold) was calculated for each sample (ΔCt = Ct(target gene) − Ct(Gapdh)) for further evaluation of relative mRNA expression levels in different genotypes. The sequence specificities of the primers were examined.
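As an illustration of the ΔCt bookkeeping described above, the sketch below computes relative expression with the standard 2^−ΔΔCt method; all Ct values are hypothetical.

```python
# Minimal sketch of dCt normalization and the 2^-ddCt fold-change method.
# All Ct values below are hypothetical, for illustration only.

def delta_ct(ct_target, ct_gapdh):
    """dCt = Ct(target gene) - Ct(Gapdh); lower dCt means higher expression."""
    return ct_target - ct_gapdh

def relative_expression(dct_sample, dct_reference):
    """Fold change of sample relative to reference: 2^-(dCt_sample - dCt_ref)."""
    return 2 ** -(dct_sample - dct_reference)

dct_wt = delta_ct(24.0, 18.0)               # wild-type Slc6a8, dCt = 6.0
dct_ko = delta_ct(30.0, 18.0)               # knockout, dCt = 12.0
print(relative_expression(dct_ko, dct_wt))  # 0.015625, i.e. ~64-fold lower
```

The reference sample is typically the wild type, so a knockout with a higher ΔCt yields a fold change below 1.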
Three pairs of primers targeting different genes were used: Slc6a8 forward, 5′- GTCTGGTGACGAGAAGAAGGG -3′, Slc6a8 reverse, 5′- CCACGCACGACATGATGAAGT -3′; Agat forward, 5′- cacagtggaggtgaaggccaatacatat -3′, Agat reverse, 5′- ccgcctcacggtcactcct -3′; Gapdh forward, 5′- AGGTCGGTGTGAACGGATTTG -3′, Gapdh reverse, 5′- TGTAGACCATGTAGTTGAGGTCA -3′. Primers for RT-PCR were designed to obtain complete coding sequences based on information obtained from the National Center for Biotechnology Information (NCBI): Slc6a8 forward, 5′- atggcgaaaaagagcgctgaaaacg -3′; Slc6a8 reverse, 5′- ttacatgacactctccaccacgacgacc -3′; Agat forward, 5′- atgctacgggtgcggtgtct -3′; Agat reverse, 5′- tcagtcaaagtaggactgaagggtgcct -3′. PCR products were electrophoresed on 1% agarose gels, stained with GelRed, visualized under UV illumination, and photographed.

Immunoblot analysis

Samples were loaded onto 10% polyacrylamide gels with the PAGE system (#1610183, Bio-Rad Laboratories, USA) and run in SDS running buffer (25 mM Tris, 192 mM glycine, 0.1% SDS, pH 8.8) for 25 min at 80 V, followed by 25–45 min at 200 V. Afterward, proteins were transferred to Immobilon NC transfer membranes (HATF00010, Millipore) at 400 mA for 2 hr in transfer buffer (25 mM Tris, 192 mM glycine, 20% methanol). Membranes were blocked in 5% fat-free milk powder in TBST (25 mM Tris, 150 mM NaCl, 0.2% Tween-20 [P1397, Sigma, St. Louis, MO], pH 7.4 adjusted with HCl) and incubated overnight with the indicated primary antibodies dissolved in TBST containing 2% BSA. Primary antibodies are listed below: rabbit anti-synaptophysin (dilution 1:5000, cat. no. 101002, Synaptic Systems [SySy], Goettingen, Germany), rabbit anti-synaptotagmin1/2 (dilution 1:2000, cat. no. 105002, SySy), rabbit anti-proton ATPase (dilution 1:1000, cat. no. 109002, SySy), rabbit anti-synaptobrevin 2 (dilution 1:5000, cat. no. 104202, SySy), rabbit anti-SV2A (dilution 1:2000, cat. no.
109003, SySy), rabbit anti-VGlut1 (dilution 1:4000, 135302, SySy), rabbit anti-VGlut2 (dilution 1:2000, 135402, SySy), rabbit anti-VGAT (dilution 1:4000, 131002, SySy), rabbit anti-SNAP23 (dilution 1:2000, cat. no. 111202, SySy), mouse anti-PSD95 (dilution 1:5000, cat. no. 75028, NeuroMab, Davis, CA), mouse anti-GluN1 (dilution 1:5000, cat. no. 114011, SySy), rabbit anti-GM130 (dilution 1:1000, cat. no. ab52649, Abcam), rabbit anti-Golgin-97 (dilution 1:2000, cat. no. 13192, Cell Signaling Technology, MA), rabbit anti-EEA1 (dilution 1:2000, cat. no. 3288, Cell Signaling Technology), rabbit anti-LC3B (dilution 1:1000, cat. no. 2775S, Cell Signaling Technology), goat anti-Cathepsin B (dilution 1:2000, AF965, R&D Systems, Minneapolis, MN), rabbit anti-GAPDH (dilution 1:1000, cat. no. 2118S, Cell Signaling Technology), rabbit anti-GluT4 (dilution 1:1000, ab33780, Abcam, Cambridge, UK), rabbit anti-CACNA1A (dilution 1:300, 152103, SySy), rabbit anti-VDAC (dilution 1:1000, cat. no. 4661S, Cell Signaling Technology), rabbit anti-MBP (1:1000, cat. no. 295003, SySy), mouse anti-creatine kinase B (dilution 1:5000, cat. no. MAB9076, R&D Systems), rabbit anti-HA (dilution 1:2000, CST3724, Cell Signaling Technology), and rabbit anti-SNAP25 (dilution 1:2000, ab109105, Abcam, UK) antibodies. Membranes were washed in three washing steps in TBST (each for 5 min) and incubated with peroxidase-conjugated secondary antibodies for 2–3 hr at 4°C. The secondary antibodies used were anti-rabbit (dilution 1:5000, A6154, Sigma), anti-mouse (dilution 1:5000, 715-035-150, Jackson ImmunoResearch, West Grove, PA, USA), or rabbit anti-goat IgG (dilution 1:1000, cat. no. ab6741, Abcam) antibodies. After repeated washing, signals were visualized using a ChemiDoc XRS+ System (Bio-Rad Laboratories).

Isolation of synaptic vesicles

Our purification procedures for SVs were based on previously established immunoisolation methods. Protein G magnetic beads (cat. no.
88848, Thermo Fisher Scientific, Waltham, MA) were washed three times with IP buffer (100 mM potassium tartrate, 4 mM HEPES-KOH, 2 mM MgCl 2 , pH 7.4) supplemented with a complete protease inhibitor cocktail (Roche, Basel, Switzerland). Then, 5 μg of monoclonal anti-Syp antibody directed against a cytoplasmic epitope (cat. no. 101011, SySy) or control mouse IgG (10400C, Thermo Fisher Scientific) was incubated with 20–30 μl beads for 30 min at RT in 2% BSA dissolved in IP buffer. Under this condition, 4–4.5 μg of antibody was coupled, as determined by western blot and Coomassie Blue staining. Immunoisolation of SVs was carried out at 0–2°C to prevent vesicular content leakage (with RT as a control). Briefly, the whole mouse brain was homogenized in 3 ml of IP buffer with a glass/Teflon homogenizer (20 strokes at 2000 rpm; WHEATON, USA, and WIGGENS WB2000-M, Germany) immediately after decapitation. Homogenates were centrifuged for 25 min at 35,000 × g, and the supernatant was adjusted to approximately 3 mg/ml protein (NanoDrop 2000C, Thermo Fisher Scientific). To capture the SVs for content detection, about 200 μl of supernatant (per 5 μg anti-Syp/IgG) was incubated with pre-coupled beads for 2.25 hr under slow rotation at 2°C. Beads were washed six times for further western blot analysis and vesicular content detection. For pharmacological blockade of the H + gradient across the SV membrane, the mix of supernatants and pre-coupled beads was diluted to 1.2 ml before the addition of inhibitors.

Determination of vesicular contents

To extract SV contents, immunoisolates were treated with 50 μl ultra-pure water. Then, 100 μl methanol together with 100 μl acetonitrile was added to precipitate proteins in the samples. After centrifugation for 20 min at 16,800 × g, supernatants were collected and centrifuged for 20 min at 2000 × g to remove beads and proteins. Samples were pre-frozen with liquid nitrogen and vacuum dried at –45°C overnight.
Dried samples were kept frozen and resuspended with 50 μl of 0.2 μM 13 C-creatine (internal control) immediately before detection. CE-MS was used to verify and quantify small molecules. CE-MS detection was performed by coupling a PA800 Plus CE system (Beckman Coulter, Brea, CA) to a mass spectrometer (TRIPLE QUAD 5500, AB SCIEX, or Q Exactive HF-X, Thermo Scientific). Before SV content detection, we optimized MS detection of classical neurotransmitters, Cr, and amino acids in positive ion mode. First, the fragment ions (Q3) for a given molecule (precursor ions, Q1) were determined either by systematic scanning of a standard sample solution (0.1 μM in 10% acetic acid) or by referring to a database (https://www.mzcloud.org). Second, optimal values of collision energy (CE), collision cell exit potential (CXP), and declustering potential (DP) were determined for each Q1/Q3 pair. Third, the optimal combination of parameters (Q1/Q3, CE, CXP, DP) was chosen for each molecule. In addition, parameters were adjusted every 2–3 months for the best signal-to-noise ratios. CE-MS separations were carried out with capillaries (OptiMS silica surface cartridge, Beckman Coulter). The CE background electrolyte was 10% acetic acid. Each new separation capillary was activated by rinsing under 100 psi sequentially with methanol for 10 min (forward), methanol for 3 min (reverse), H 2 O for 10 min (forward), H 2 O for 3 min (reverse), 0.1 M NaOH for 10 min (forward), water for 5 min (reverse), 0.1 M HCl for 10 min (forward), followed by water for 10 min and then 10% acetic acid for 10 min (forward) and 3 min (reverse), prior to first use. Between analyses, the capillary was rinsed with 10% acetic acid under 100 psi for 5 min (forward), followed by 75 psi for 4 min. The sample (50 μl) was injected at 2.5–4 psi for 30 s. A separation voltage of 25 kV was applied for 25 min. To maintain a stable spray during CE separation, the ion spray voltage was applied at 1.7–1.9 kV.
MS data were collected 5 min after the start of CE separation. Finally, the capillary was washed with 10% acetic acid for 10 min, followed by methanol for 20 min and then 10% acetic acid for 20 min. Standard solutions of 0.2 μM 13 C-Cr (internal control) and analytes were used to plot standard curves. Linear standard curves (R 2 > 0.98; in most cases, R 2 > 0.99), calculated from peak area ratios of analytes to internal standards, were obtained for all molecules tested. The concentration ranges used for standards of Glu, GABA, ACh, 5-HT, Cr, and alanine were 0.03–10 μM, 0.003–1 μM, 0.0003–0.1 μM, 0.003–1 μM, 0.03–1 μM, and 0.03–1 μM, respectively. Standard curves were made at least twice for a given capillary. Analytes of SV contents were quantified using the standard curves and normalized to the amount of anti-Syp antibody conjugated to the beads.

Electron microscopy

All EM grids were glow discharged for 30 s using a plasma cleaner (Harrick PDC-32G-2, Ithaca, NY). To free SVs from beads, 25 μl of 0.1 M glycine-HCl (pH 2) was applied for 1 min and quickly neutralized with 25 μl of 0.1 M Tris (pH 10). Beads were quickly removed, and 2–4 μl aliquots of SVs were applied to carbon-coated copper grids (Zhong Jing Ke Yi, Beijing, China). After 1 min, the grid was blotted with filter paper (Whatman No. 1), rinsed in water, and immediately stained with 2% uranyl acetate for 30 s. Finally, the uranyl acetate was removed and the grid was air-dried. The grids were examined on a JEM-F200 electron microscope (JEOL, Tokyo, Japan) operated at 200 kV. Images were recorded using a 4k × 4k CMOS OneView camera (Gatan, Abingdon, UK). Fixation of synaptosomal pellets was performed by immersion in pre-warmed 2.5% glutaraldehyde in 0.1 M phosphate buffer (pH 7.4) at RT for 2 hr. After washing four times with 0.1 M phosphate buffer (pH 7.4), every 15 min, samples were post-fixed with 1% osmium tetroxide (w/v) at 4°C for 1 hr and then washed three times.
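Returning to the CE-MS quantification above: peak-area ratios (analyte over internal standard) are regressed linearly against standard concentrations, the curve is accepted if R 2 > 0.98, and unknowns are read off the fitted line. A minimal sketch, with invented calibration points:

```python
# Least-squares calibration line and R^2, as used for CE-MS standard curves.
# The calibration points below are invented for illustration.

def linear_fit(xs, ys):
    """Ordinary least squares y = a*x + b; returns (a, b, r_squared)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    b = my - a * mx
    ss_res = sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return a, b, 1 - ss_res / ss_tot

conc = [0.03, 0.1, 0.3, 1.0]        # Cr standards, uM
ratio = [0.15, 0.52, 1.49, 5.05]    # peak-area ratio to 0.2 uM 13C-Cr
a, b, r2 = linear_fit(conc, ratio)
assert r2 > 0.98                    # acceptance criterion from the text
unknown = (2.0 - b) / a             # invert the line for an unknown sample
print(round(unknown, 3))            # estimated concentration, in uM
```

In the actual workflow this estimate would additionally be normalized to the amount of anti-Syp antibody on the beads.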
Following en bloc staining with 2% uranyl acetate (w/v) at 4°C overnight, samples were dehydrated, embedded in fresh resin, and polymerized at 65°C for 24 hr. Ultrathin (70 nm) sections were obtained using a Leica UC7 ultramicrotome (Leica Microsystems, Wetzlar, Germany) and recorded at 80 kV in a JEOL JEM-1400 transmission electron microscope (JEOL) using a CMOS camera (XAROSA, EMSIS, Muenster, Germany).

Immunohistochemistry

Adult mice were anesthetized by i.p. injection of 2% 2,2,2-tribromoethanol (T48402, Sigma) in saline at a dose of 400 mg/kg and perfused transcardially with 0.9% saline, followed by 4% PFA in PBS (137 mM NaCl, 2.7 mM KCl, 10 mM Na 2 HPO 4 , 1.8 mM KH 2 PO 4 , pH 7.4). Brains were cryoprotected with 30% sucrose in 0.1 M PB (81 mM Na 2 HPO 4 , 19 mM NaH 2 PO 4 ) and sectioned in the coronal plane (40 μm thick) using a cryostat (Leica 3050S). For anti-HA immunostaining, we used a rabbit monoclonal anti-HA antibody (1:500 in 0.3% Triton in PBS; 48 hr incubation at 4°C; #3724, Cell Signaling Technology), followed by a goat anti-rabbit Alexa Fluor 546 secondary antibody (1:1000; overnight at 4°C; #A-11035, Invitrogen, Waltham, MA). Sections were mounted in a medium containing 50% glycerol, cover-slipped, and sealed with nail polish. Images were acquired using a virtual slide microscope (Olympus VS120-S6-W, Tokyo, Japan) and a laser-scanning confocal microscope (Zeiss 710, Cambridge, UK), and brain structures were inferred with an established mouse brain atlas.

Preparations of brain slices

Male C57 mice (30–38 days old) were anesthetized with pentobarbital (250 mg/kg) and decapitated. Brains were quickly removed and placed into ice-cold, low-calcium, high-magnesium artificial cerebrospinal fluid (ACSF) with sodium replaced by choline.
The medium consisted of 120 mM choline chloride, 2.5 mM KCl, 7 mM MgSO 4 , 0.5 mM CaCl 2 , 1.25 mM NaH 2 PO 4 , 5 mM sodium ascorbate, 3 mM sodium pyruvate, 26 mM NaHCO 3 , and 25 mM D-glucose, and was pre-equilibrated with 95% O 2 –5% CO 2 . Coronal brain slices (300 μm thick) were cut with a vibratome (Leica VT1200S). Slices were incubated for 1 hr at 34°C in oxygenated ACSF containing 124 mM NaCl, 2.5 mM KCl, 2 mM MgSO 4 , 2.5 mM CaCl 2 , 1.25 mM NaH 2 PO 4 , 26 mM NaHCO 3 , and 10 mM D-glucose.

Evoked release from brain slices

Coronal brain slices (each 300 μm thick, typically with a wet weight of 17–20 mg) were transferred into a specially designed superfusion chamber with a volume of approximately 200 μl, containing fresh ACSF oxygenated with 95% O 2 /5% CO 2 . Slices were equilibrated for 10 min in ACSF at a superfusion rate of 0.9–1.25 ml/min. The ‘control’ sample was collected for 1 min just before high-K + stimulation (K-ACSF, with 70 mM KCl replacing an equal amount of NaCl). We waited 30 s to allow the K + stimulus to reach the slices (dead volume for solution transition of 200 μl and chamber volume of 200 μl), and the sample ‘70 mM K’ in response to K-ACSF was then collected for another 1 min. Following a 10 min washout period, we collected the third sample, ‘wash’, for 1 min. To detect Ca 2+ -dependent release, slices were pre-incubated for 10 min with normal ACSF and equilibrated with Ca 2+ -free ACSF (containing 1 mM EGTA to chelate extracellular Ca 2+ ) for 10 min. The baseline sample ‘0 Ca 2+ ACSF’ was collected for 1 min. The superfusion solution was changed to Ca 2+ -free K-ACSF for 2 min, and the sample ‘0 Ca 2+ 70 mM K’ was collected (dead volume for solution transition of 400 μl and chamber volume of 200 μl). Then, the solution was changed back to normal ACSF for 10 min and to K-ACSF for 2 min. The sample ‘2.5 mM Ca 70 mM K’ was collected for the last minute.
Samples were subjected to CE-MS by a method similar to that for SV content detection, except for the following: (1) standards were dissolved in ACSF or the other buffers used in the release experiments; (2) the concentration range used for Glu standards was 0.003–1 μM; and (3) to protect the MS from salt pollution, data were collected from 10 to 20 min during CE separation.

Patch-clamp recordings

Slices were transferred to a recording chamber on an upright fluorescent microscope equipped with differential interference contrast optics (DIC; Olympus BX51WI). Slices were submerged and superfused with ACSF at about 2.8 ml/min at 24–26°C. Whole-cell patch recordings were routinely achieved from layer 4/5 medium-sized pyramidal neurons of the somatosensory cortex. Patch pipettes (3–5 MΩ) contained 140 mM K-gluconate, 10 mM HEPES, 0.5 mM EGTA, 5 mM KCl, 3 mM Na 2 -ATP, 0.5 mM Na 3 GTP, and 4 mM MgCl 2 (pH adjusted to 7.3; osmolarity 290 mOsm/kg). Current-clamp recordings were carried out with a computer-controlled amplifier (Multiclamp 700B, Molecular Devices), and traces were digitized at 10 kHz (DigiData 1550B, Molecular Devices). Data were collected and analyzed using Clampfit or Clampex 10 software (Molecular Devices). Cells were characterized by their membrane responses and firing patterns during hyperpolarizing and depolarizing current steps (–100 to +500 pA; increment: 50 pA or 25 pA; 500 ms). Regular-spiking pyramidal neurons were identified by moderate maximal spiking frequencies (20–60 Hz, i.e., 10–30 spikes per 500 ms), increasing inter-spike intervals during depolarizing steps, high action potential amplitudes, and large half-widths. After the mean firing frequency evoked by current injections had reached a steady state for at least 5 min (typically 20–30 min after formation of the whole-cell configuration), 100 μM Cr was bath-applied for 6 min. Typically, Cr was applied a second time following washout to reconfirm the effects.
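The regular-spiking criteria above (10–30 spikes per 500 ms step, with lengthening inter-spike intervals) can be checked directly from spike times; the spike train below is hypothetical.

```python
# Classify a 500 ms current step as regular spiking: moderate rate (20-60 Hz)
# plus spike-frequency adaptation (inter-spike intervals grow over the step).
# The spike times are hypothetical, for illustration only.

def firing_stats(spike_times_ms, step_ms=500.0):
    """Mean firing rate (Hz) and inter-spike intervals (ms) over a step."""
    rate_hz = len(spike_times_ms) / (step_ms / 1000.0)
    isis = [b - a for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    return rate_hz, isis

def is_regular_spiking(spike_times_ms):
    rate, isis = firing_stats(spike_times_ms)
    adapting = all(later >= earlier for earlier, later in zip(isis, isis[1:]))
    return 20.0 <= rate <= 60.0 and adapting

spikes = [10, 30, 55, 85, 120, 160, 205, 255, 310, 370, 435]  # ms into the step
print(is_regular_spiking(spikes))  # True: 22 Hz, ISIs lengthen from 20 to 65 ms
```

Amplitude and half-width criteria would additionally be read from the voltage traces themselves rather than from spike times.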
Synaptosome preparation

Synaptosomes were isolated by Ficoll/sucrose density-gradient centrifugation. Whole brains from adult male mice were homogenized with 15 strokes at 900 rpm in buffer A (320 mM sucrose, 1 mM EDTA, 1 mM EGTA, 10 mM Tris–HCl, pH 7.4, with a complete protease inhibitor cocktail; Roche). The homogenate (H fraction) was centrifuged at 1000 × g for 10 min to precipitate membrane fragments and nuclei (P1 fraction). The supernatant was centrifuged again at 1000 × g for 10 min, and the resulting supernatant (S1) was centrifuged at 12,000 × g for 20 min. The supernatant was the S2 fraction, and the pellet was resuspended in buffer A and centrifuged at 12,000 × g for 20 min. The resulting pellet was crude synaptosomes (P2 fraction), containing synaptosomes together with mitochondria and microsomes. Crude synaptosomes (P2 fraction) were resuspended in 150–200 μl buffer B (320 mM sucrose and 10 mM Tris–HCl [pH 7.4]). The sample was carefully overlaid on top of a gradient of 2 ml of 7.5% (wt/vol in buffer B) Ficoll and 1.8 ml of 13% (wt/vol in buffer B) Ficoll and centrifuged at 98,000 × g for 45 min at 2–4°C in a swinging-bucket rotor. A myelin band was present near the surface, and the synaptosome band (fraction Sy) was present at the interface between the 13% and 7.5% Ficoll layers, with the mitochondria pelleted at the bottom. For further western analysis, the supernatant was divided into six fractions (600 μl each) and the mitochondrial pellet was discarded. The isolated synaptosomes were contained in fraction 4. For western analysis, fractions H, S1, P1, S2, P2, and Sy were adjusted to 0.5 mg/ml, with protein concentrations determined by the bicinchoninic acid (BCA) assay with reference to a NanoDrop 2000 spectrophotometer. 3.35 μg of protein was loaded per lane. Fractions 1–6 were loaded at the same volume (10 μl, composed of 6.7 μl sample and 3.3 μl loading buffer) per lane.
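The loading arithmetic above (0.5 mg/ml stocks, 6.7 μl of sample plus 3.3 μl of loading buffer per lane) can be sanity-checked with two small helpers; the 2.0 mg/ml stock in the example is hypothetical.

```python
# Gel-loading arithmetic: dilute a stock to a target concentration, then
# confirm the micrograms loaded per lane. The 2.0 mg/ml stock is hypothetical.

def dilution_volumes(stock_mg_per_ml, target_mg_per_ml, final_ul):
    """Volumes (ul) of stock and diluent needed to reach the target."""
    stock_ul = final_ul * target_mg_per_ml / stock_mg_per_ml
    return stock_ul, final_ul - stock_ul

def micrograms_loaded(conc_mg_per_ml, sample_ul):
    """mg/ml x ul = ug of protein in the lane."""
    return conc_mg_per_ml * sample_ul

print(dilution_volumes(2.0, 0.5, 100.0))  # (25.0, 75.0): 25 ul stock + 75 ul buffer
print(micrograms_loaded(0.5, 6.7))        # 3.35 ug per lane, as stated in the text
```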
Creatine uptake into synaptosomes

To remove Ficoll, we diluted the synaptosomal band (480 μl) with 4.3 ml of a pH 7.4 buffer C containing (in mM) 240 mannitol, 10 glucose, 4.8 potassium gluconate, 2.2 calcium gluconate, 1.2 MgSO 4 , 1.2 KH 2 PO 4 , and 25 HEPES-Tris. The sample was then centrifuged at 12,000 × g, and the pellet was resuspended in buffer C. Uptake experiments were performed either at 37°C or at 0°C (control). For each sample, 25–43 μg of synaptosomes (in a volume of 40–50 μl) was added to 360 μl of buffer containing (in mM) 100 NaCl, 40 mannitol, 10 glucose, 4.8 potassium gluconate, 2.2 calcium gluconate, 1.2 MgSO 4 , 1.2 KH 2 PO 4 , 25 HEPES, and 25 Tris (pH adjusted to 7.4). A mixture of 18 μM [ 14 C]-creatine (0.4 μCi) and 5 μM creatine was quickly added. After 10 min, uptake was terminated by the addition of 1 ml of NaCl-free ice-cold buffer C. Samples were immediately filtered, under vacuum, through a Whatman GF/C glass filter (1825-025) pre-wetted with buffer C. Filters were further washed with 10 ml of ice-cold buffer C and dissolved in scintillation fluid, and the radioactivity was determined by liquid scintillation spectrometry.

Creatine uptake into SVs

Uptake of 13 C-creatine was assayed according to a conventional procedure with slight modifications: SVs immunoisolated with 10 μg Syp antibody (101011, SySy) were resuspended in uptake buffer (150 mM meglumine-tartrate, 4 mM KCl, 4 mM MgSO 4 , 10 mM HEPES-KOH [pH 7.4], and cOmplete EDTA-free protease inhibitor cocktail) containing 4 mM Mg-ATP or an additional 4 mM MgSO 4 , followed by preincubation for 30 min at 25°C. The uptake reaction was started by the addition of 1 mM 13 C-creatine dissolved in uptake buffer, at a final volume of 125 μl (pH 6.8). After 10 min at 25°C, 1 ml of ice-cold uptake buffer was added to stop the reaction, followed by five additional washes.
The SV contents were extracted using the protocol described in the section on determination of vesicular contents; 100 nM Cr was used as the internal control. CE-MS and LC-MS were used to verify and quantify the creatine contents of samples. A Vanquish UHPLC system coupled to a Q Exactive HF-X mass spectrometer (both instruments from Thermo Fisher Scientific) was used for LC-MS analysis, along with a SeQuant ZIC-HILIC column (150 mm × 2.1 mm, 3.5 μm; Merck Millipore, 150442) in positive mode and a SeQuant ZIC-pHILIC column (150 mm × 2.1 mm, 5 μm; Merck Millipore, 150460) in negative mode. For the ZIC-HILIC column, mobile phase A was 0.1% formic acid in water and mobile phase B was 0.1% formic acid in acetonitrile. The linear gradient was as follows: 0 min, 80% B; 6 min, 50% B; 13 min, 50% B; 14 min, 20% B; 18 min, 20% B; 18.5 min, 80% B; and 30 min, 80% B. The flow rate was 300 μl/min, and the column temperature was maintained at 30°C. For the ZIC-pHILIC column, mobile phase A was 20 mM ammonium carbonate in water, adjusted to pH 9.0 with 0.1% ammonium hydroxide solution (25%), and mobile phase B was 100% acetonitrile. The linear gradient was as follows: 0 min, 80% B; 2 min, 80% B; 19 min, 20% B; 20 min, 80% B; and 30 min, 80% B. The flow rate was 150 μl/min, and the column temperature was 25°C. Samples were maintained at 4°C in the Vanquish autosampler, and 3 μl of extracted metabolites was injected for each run. IP samples were subjected to the ZIC-HILIC column in positive mode for detection of major metabolites, and then to the ZIC-pHILIC column in negative mode for orthogonal detection.
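Returning to the [ 14 C]-creatine synaptosome uptake assay: converting filter counts into moles of creatine requires the specific activity of the tracer pool. A sketch with hypothetical counts, assuming the label equilibrates with the full 23 μM creatine pool in an ~410 μl assay and a 95% counting efficiency:

```python
# Convert scintillation counts (cpm) to pmol of creatine transported.
# The counting efficiency and the 2000 cpm reading are hypothetical.

def specific_activity_dpm_per_pmol(tracer_uci, total_pmol):
    """Label diluted into the total creatine pool; 1 uCi = 2.22e6 dpm."""
    return tracer_uci * 2.22e6 / total_pmol

def uptake_pmol(cpm, sa_dpm_per_pmol, counting_efficiency=0.95):
    """cpm -> dpm (divide by efficiency) -> pmol (divide by specific activity)."""
    return cpm / counting_efficiency / sa_dpm_per_pmol

# 18 uM labeled + 5 uM cold creatine in ~410 ul assay volume (values from text):
total_pmol = (18 + 5) * 410               # uM x ul = pmol
sa = specific_activity_dpm_per_pmol(0.4, total_pmol)
print(round(uptake_pmol(2000.0, sa), 1))  # -> 22.4 pmol for 2000 cpm
```

Subtracting the matched 0°C control from the 37°C value would then give transporter-mediated uptake.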
Total RNA of whole brains from mice of different genotypes was extracted using the Buffer RZ (Tiangen, no. RK14, Beijing, China) and reverse transcribed into complementary DNA (cDNA) using the RevertAid First-Strand cDNA synthesis kit (Thermo Scientific, K1622). qPCR was performed using the Taq Pro Universal SYBR qPCR Master Mix (Vazyme, Q712-02) on Bio-Rad CFX-96 Touch Real-time PCR System (Bio-Rad, USA). Glyceraldehyde-3-phosphate dehydrogenase ( Gapdh ) was used as an internal control. ΔCt (difference in cycle threshold) was calculated for each sample (ΔCt = Ct Target gene − Ct GAPDH ) for further evaluation of relative mRNA expression levels in different genotypes. The sequence specificities of the primers were examined. Three pairs of primers targeting different genes were used: Slc6a8 forward, 5′- GTCTGGTGACGAGAAGAAGGG -3′, Slc6a8 reverse, 5′- CCACGCACGACATGATGAAGT -3′; Agat forward, 5′- cacagtggaggtgaaggccaatacatat -3′, Agat reverse, 5′- ccgcctcacggtcactcct -3′; Gapdh forward, 5′- AGGTCGGTGTGAACGGATTTG -3′, Gapdh reverse, 5′- TGTAGACCATGTAGTTGAGGTCA -3′. Primers for reverse PCR were designed to obtain complete coding sequences based on information obtained from the National Center for Biotechnology Information (NCBI): Slc6a8 forward, 5′- atggcgaaaaagagcgctgaaaacg -3′; Slc6a8 reverse, 5′- ttacatgacactctccaccacgacgacc -3′; Agat forward, 5′- atgctacgggtgcggtgtct -3′; Agat reverse, 5′- tcagtcaaagtaggactgaagggtgcct -3′. PCR products were electrophoresed on 1% agarose gels, stained with GelRed, visualized under UV illumination, and photographed. Samples were loaded onto 10% polyacrylamide gels with the PAGE system (#1610183, Bio-Rad Laboratories, USA) and run in the SDS running buffer (25 mM Tris, 192 mM glycine, 0.1% SDS, pH 8.8) for 25 min at 80 V followed by 25–45 min at 200 V. Afterward, proteins were transferred to immobilon NC transfer membranes (HATF00010, Millipore) at 400 mA for 2 hr in transfer buffer (25 mM Tris, 192 mM glycine, 20% methanol). 
Membranes were blocked in 5% fat-free milk powder in TBST (25 mM Tris, 150 mM NaCl, 0.2% Tween-20 [P1397, Sigma, St. Louis, MO], pH 7.4 adjusted with HCl) and incubated overnight with the indicated primary antibodies dissolved in TBST containing 2% BSA. Primary antibodies are listed below: rabbit anti-synaptophysin (dilution 1:5000, cat. no. 101002, synaptic systems [SySy], Goettingen, Germany), rabbit anti-synaptotagmin1/2 (dilution 1:2000, cat. no. 105002, SySy), rabbit anti-proton ATPase (dilution 1:1000, cat. no. 109002, SySy), rabbit anti-synaptobrevin 2 (dilution 1:5000, cat. no. 104202, SySy), rabbit anti-SV2A (dilution 1:2000, cat. no. 109003, SySy), rabbit anti-VGlut1 (dilution 1:4000, 135302, SySy), rabbit anti-VGlut2 (dilution 1:2000, 135402, SySy), rabbit anti-VGAT (dilution 1:4000, 131002, SySy), rabbit anti-SNAP23 (dilution 1:2000, cat. no. 111202, SySy), mouse anti-PSD95 (dilution 1:5000, cat. no. 75028, NeuroMab, Davis, CA), mouse anti-GluN1(dilution 1:5000, cat. no. 114011, SySy), rabbit anti-GM130 (dilution 1:1000, cat. no. ab52649, Abcam), rabbit anti-Golgin-97 (dilution 1:2000, cat. no. 13192, Cell Signaling Technology, MA), rabbit anti-EEA1 (dilution 1:2000, cat. no. 3288, Cell Signaling Technology), rabbit anti-LC3B (dilution 1:1000, cat. no. 2775S, Cell Signaling Technology), goat anti-CathepsinB (dilution 1:2000, AF965, R&D Systems, Minneapolis, MN), rabbit anti-GAPDH (dilution 1:1000, cat. no. 2118S, Cell Signaling Technology), rabbit anti-GluT4 (dilution 1:1000, ab33780, Abcam, Cambridge, UK), rabbit anti-CACNA1A (dilution 1:300, 152103, SySy), rabbit anti-VDAC (dilution 1:1000, cat. no. 4661S, Cell Signaling Technology), rabbit anti-MBP (1:1000, cat. no. 295003, SySy), mouse anti-Creatine Kinase B (dilution 1:5000, cat. no. MAB9076, R&D Systems), rabbit anti-HA (dilution 1:2000, CST3724, Cell Signaling Technology), and rabbit anti SNAP25 (dilution 1:2000, ab109105, Abcam, UK) antibodies. 
Membranes were washed three times in TBST (5 min each) and incubated with peroxidase-conjugated secondary antibodies for 2–3 hr at 4°C. The secondary antibodies used were anti-rabbit (dilution 1:5000, A6154, Sigma), anti-mouse (dilution 1:5000, 715-035-150, Jackson ImmunoResearch, West Grove, PA, USA), or rabbit anti-goat IgG (dilution 1:1000, cat. no. ab6741, Abcam) antibodies. After repeated washing, signals were visualized using a ChemiDoc XRS+ System (Bio-Rad Laboratories).

Our purification procedures for SVs were based on previously established immunoisolation methods. Protein G magnetic beads (cat. no. 88848, Thermo Fisher Scientific, Waltham, MA) were washed three times with IP buffer (100 mM potassium tartrate, 4 mM HEPES-KOH, 2 mM MgCl2, pH 7.4) supplemented with a complete protease inhibitor cocktail (Roche, Basel, Switzerland). Then, 5 μg monoclonal anti-Syp antibody directed against a cytoplasmic epitope (cat. no. 101011, SySy) or control mouse IgG (10400C, Thermo Fisher Scientific) was incubated with 20–30 μl beads for 30 min at room temperature (RT) in 2% BSA dissolved in IP buffer. Under this condition, 4–4.5 μg of antibody was coupled, as determined by western blot and Coomassie Blue staining. Immunoisolation of SVs was carried out at 0–2°C to prevent vesicular content leakage (with RT as a control). Briefly, the whole mouse brain was homogenized in 3 ml of IP buffer with a glass/Teflon homogenizer (20 strokes at 2000 rpm; WHEATON, USA, and WIGGENS WB2000-M, Germany) immediately after decapitation. Homogenates were centrifuged for 25 min at 35,000 × g, and the supernatant was adjusted to approximately 3 mg/ml protein (NanoDrop 2000C, Thermo Fisher Scientific). To capture the SVs for content detection, about 200 μl of supernatant (per 5 μg anti-Syp/IgG) was incubated with pre-coupled beads for 2.25 hr under slow rotation at 2°C. Beads were washed six times for further western blot analysis and vesicular content detection.
For pharmacological blockade of the H+ gradient across the SV membrane, the mix of supernatant and pre-coupled beads was diluted to 1.2 ml before the addition of inhibitors. To extract SV contents, immunoisolates were treated with 50 μl ultra-pure water. Then, 100 μl methanol together with 100 μl acetonitrile was added to precipitate proteins in the samples. After centrifugation for 20 min at 16,800 × g, supernatants were collected and centrifuged for 20 min at 2000 × g to remove beads and proteins. Samples were pre-frozen with liquid nitrogen and vacuum dried at –45°C overnight. Dried samples were kept frozen and resuspended with 50 μl of 0.2 μM 13C-creatine (internal standard) immediately before detection.

CE-MS was used to verify and quantify small molecules. CE-MS detection was performed by coupling a PA800 Plus CE system (Beckman Coulter, Brea, CA) to a mass spectrometer (TRIPLE QUAD 5500, AB SCIEX, or Q Exactive HF-X, Thermo Scientific). Before SV content detection, we optimized MS detection of classical neurotransmitters, Cr, and amino acids in positive ion mode. First, the fragment ions (Q3) for a given molecule (precursor ion, Q1) were determined either by systematic scanning of a standard sample solution (0.1 μM in 10% acetic acid) or by referring to a database (https://www.mzcloud.org). Second, optimal values of collision energy (CE), collision cell exit potential (CXP), and declustering potential (DP) were determined for each Q1/Q3 pair. Third, the optimal combination of parameters (Q1/Q3, CE, CXP, DP) was chosen for each molecule. In addition, parameters were re-optimized every 2–3 mo for the best signal-to-noise ratios. CE-MS separations were carried out with capillaries (OptiMS silica surface cartridge, Beckman Coulter). The CE background electrolyte was 10% acetic acid.
Each new separation capillary was activated prior to first use by rinsing under 100 psi sequentially with methanol for 10 min (forward), methanol for 3 min (reverse), H2O for 10 min (forward), H2O for 3 min (reverse), 0.1 M NaOH for 10 min (forward), water for 5 min (reverse), and 0.1 M HCl for 10 min (forward), followed by water for 10 min and then 10% acetic acid for 10 min (forward) and 3 min (reverse). Between analyses, the capillary was rinsed with 10% acetic acid under 100 psi for 5 min (forward) followed by 75 psi for 4 min. The sample (50 μl) was injected at 2.5–4 psi for 30 s. A separation voltage of 25 kV was applied for 25 min. To maintain a stable spray during CE separation, an ion spray voltage of 1.7–1.9 kV was applied. MS data were collected starting 5 min after CE separation. Finally, the capillary was washed with 10% acetic acid for 10 min, followed by methanol for 20 min and then 10% acetic acid for 20 min. Standard solutions of 0.2 μM 13C-Cr (internal standard) and analytes were used to plot standard curves. Linear standard curves (R² > 0.98; for most cases, R² > 0.99), calculated from the peak area ratios of analytes to internal standards, were obtained for all molecules tested. The concentration ranges used for standards of Glu, GABA, ACh, 5-HT, Cr, and alanine were 0.03–10 μM, 0.003–1 μM, 0.0003–0.1 μM, 0.003–1 μM, 0.03–1 μM, and 0.03–1 μM, respectively. Standard curves were made at least twice for a given capillary. Analytes of SV contents were quantified using the standard curves and normalized to the amount of anti-Syp antibody conjugated to the beads.

All EM grids were glow discharged for 30 s using a plasma cleaner (Harrick PDC-32G-2, Ithaca, NY). To free SVs from beads, immunoisolates were incubated with 25 μl 0.1 M glycine-HCl (pH 2) for 1 min and quickly neutralized with 25 μl 0.1 M Tris (pH 10).
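The standard-curve quantification described above (linear fit of analyte/internal-standard peak-area ratio against concentration, followed by normalization to the amount of bead-conjugated antibody) can be sketched in Python. This is a minimal illustration assuming numpy; the function names and example values are hypothetical, not data from the study.

```python
import numpy as np

def fit_standard_curve(concs_uM, area_ratios):
    """Least-squares line through (concentration, analyte/IS peak-area ratio);
    returns slope, intercept, and the coefficient of determination R^2."""
    slope, intercept = np.polyfit(concs_uM, area_ratios, 1)
    r2 = np.corrcoef(concs_uM, area_ratios)[0, 1] ** 2
    return slope, intercept, r2

def quantify(area_ratio, slope, intercept, antibody_ug):
    """Invert the standard curve for an unknown sample and normalize the
    concentration (μM) to the μg of anti-Syp antibody coupled to the beads."""
    conc_uM = (area_ratio - intercept) / slope
    return conc_uM / antibody_ug

# Hypothetical Cr standards (μM) and their measured peak-area ratios
slope, intercept, r2 = fit_standard_curve([0.03, 0.1, 0.3, 1.0],
                                          [0.06, 0.20, 0.60, 2.00])
```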
Beads were quickly removed, and 2–4 μl aliquots of SVs were applied to carbon-coated copper grids (Zhong Jing Ke Yi, Beijing, China). After 1 min, the grid was blotted dry with filter paper (Whatman No. 1), rinsed with water, and then immediately stained with 2% uranyl acetate for 30 s. Finally, the uranyl acetate was removed and the grid was air dried. The grids were examined on a JEM-F200 electron microscope (JEOL, Tokyo, Japan) operated at 200 kV. Images were recorded using a 4k × 4k CMOS OneView camera (Gatan, Abingdon, UK).

Fixation of synaptosomal pellets was performed by immersion in pre-warmed 2.5% glutaraldehyde in 0.1 M phosphate buffer (pH 7.4) at RT for 2 hr. After washing four times with 0.1 M phosphate buffer (pH 7.4), 15 min each, samples were post-fixed with 1% osmium tetroxide (w/v) at 4°C for 1 hr and then washed three times. Following en bloc staining with 2% uranyl acetate (w/v) at 4°C overnight, samples were dehydrated, embedded in fresh resin, and polymerized at 65°C for 24 hr. Ultrathin (70 nm) sections were obtained using a Leica UC7 ultramicrotome (Leica Microsystems, Wetzlar, Germany) and recorded at 80 kV in a JEOL JEM-1400 transmission electron microscope using a CMOS camera (XAROSA, EMSIS, Münster, Germany).

Adult mice were anesthetized by i.p. injection of 2% 2,2,2-tribromoethanol (T48402, Sigma) in saline at a dose of 400 mg/kg and perfused transcardially with 0.9% saline, followed by 4% PFA in PBS (137 mM NaCl, 2.7 mM KCl, 10 mM Na2HPO4, 1.8 mM KH2PO4, pH 7.4). Brains were cryoprotected with 30% sucrose in 0.1 M PB (81 mM Na2HPO4, 19 mM NaH2PO4) and sectioned in the coronal plane (40 μm thick) using a cryostat (Leica 3050S).
For anti-HA immunostaining, we used a rabbit monoclonal anti-HA antibody (1:500 in 0.3% Triton in PBS; 48 hr incubation at 4°C; #3724, Cell Signaling Technology), followed by a goat anti-rabbit Alexa Fluor 546 secondary antibody (1:1000; overnight at 4°C; #A-11035, Invitrogen, Waltham, MA). Sections were mounted in a medium containing 50% glycerol, cover-slipped, and sealed with nail polish. Images were acquired using a virtual slide microscope (Olympus VS120-S6-W, Tokyo, Japan) and a laser-scanning confocal microscope (Zeiss 710, Cambridge, UK), and brain structures were inferred with an established mouse brain atlas.

Male C57 mice (30–38 days old) were anesthetized with pentobarbital (250 mg/kg) and decapitated. Brains were quickly removed and placed into ice-cold, low-calcium, high-magnesium artificial cerebrospinal fluid (ACSF) with sodium replaced by choline. The medium consisted of 120 mM choline chloride, 2.5 mM KCl, 7 mM MgSO4, 0.5 mM CaCl2, 1.25 mM NaH2PO4, 5 mM sodium ascorbate, 3 mM sodium pyruvate, 26 mM NaHCO3, and 25 mM D-glucose, and was pre-equilibrated with 95% O2–5% CO2. Coronal brain slices (300 μm thick) were cut with a vibratome (Leica VT1200S). Slices were incubated for 1 hr at 34°C in oxygenated ACSF containing 124 mM NaCl, 2.5 mM KCl, 2 mM MgSO4, 2.5 mM CaCl2, 1.25 mM NaH2PO4, 26 mM NaHCO3, and 10 mM D-glucose.

Coronal brain slices (each 300 μm thick, typically with a wet weight of 17–20 mg) were transferred into a specially designed superfusion chamber with a volume of approximately 200 μl, containing freshly oxygenated (95% O2/5% CO2) ACSF. Slices were equilibrated for 10 min in ACSF at a superfusion rate of 0.9–1.25 ml/min. The 'control' sample was collected for 1 min just before high K+ stimulation (K-ACSF, 70 mM KCl replacing an equal amount of NaCl).
We waited 30 s to allow the K+ stimulus to immerse the slices (dead volume for solution transition of 200 μl and chamber volume of 200 μl); the sample '70 mM K' in response to K-ACSF was then collected for another 1 min. Following a 10 min washout period, we collected the third sample, 'wash', for 1 min. To detect Ca2+-dependent release, slices were pre-incubated for 10 min with normal ACSF and equilibrated with Ca2+-free ACSF (containing 1 mM EGTA to chelate extracellular Ca2+) for 10 min. The baseline sample '0 Ca2+ ACSF' was collected for 1 min. The superfusion solution was changed to Ca2+-free K-ACSF for 2 min, and the sample '0 Ca2+ 70 mM K' was collected (dead volume for solution transition of 400 μl and chamber volume of 200 μl). Then, the solution was changed back to normal ACSF for 10 min and to K-ACSF for 2 min. The sample '2.5 mM Ca 70 mM K' was collected during the last minute. Samples were subjected to CE-MS with a method similar to that used for SV content detection, except for the following: (1) standards were dissolved in ACSF or the other buffers used in the release experiments; (2) the concentration range used for Glu standards was 0.003–1 μM; and (3) to protect the MS from salt contamination, data were collected from 10 to 20 min during CE separation.

Slices were transferred to a recording chamber on an upright fluorescent microscope equipped with differential interference contrast optics (DIC; Olympus BX51WI). Slices were submerged and superfused with ACSF at about 2.8 ml/min at 24–26°C. Whole-cell patch recordings were routinely obtained from layer 4/5 medium-sized pyramidal neurons of the somatosensory cortex. Patch pipettes (3–5 MΩ) contained 140 mM K-gluconate, 10 mM HEPES, 0.5 mM EGTA, 5 mM KCl, 3 mM Na2-ATP, 0.5 mM Na3GTP, and 4 mM MgCl2 (pH adjusted to 7.3; osmolarity 290 mOsm/kg).
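From the superfusate fractions collected above, K+-evoked release and its Ca2+-dependent component reduce to simple subtractions. The sketch below is an illustrative Python version with hypothetical Cr amounts (not data from the study); the function names are ours.

```python
def evoked_release(stim, baseline):
    """K+-evoked release: analyte in the stimulated fraction minus the
    baseline fraction collected just before stimulation."""
    return stim - baseline

def ca_dependent_fraction(rel_ca, rel_0ca):
    """Fraction of the K+-evoked release that requires extracellular Ca2+."""
    return (rel_ca - rel_0ca) / rel_ca

# Hypothetical Cr amounts (pmol) in 1 min superfusate fractions
rel_normal = evoked_release(stim=120.0, baseline=40.0)  # 2.5 mM Ca2+, 70 mM K+
rel_0ca = evoked_release(stim=60.0, baseline=40.0)      # 0 Ca2+, 70 mM K+
frac = ca_dependent_fraction(rel_normal, rel_0ca)       # (80 - 20) / 80 = 0.75
```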
Current-clamp recordings were carried out with a computer-controlled amplifier (Multiclamp 700B, Molecular Devices), and traces were digitized at 10 kHz (DigiData 1550B, Molecular Devices). Data were collected and analyzed using Clampfit or Clampex 10 software (Molecular Devices). Cells were characterized by their membrane responses and firing patterns during hyperpolarizing and depolarizing current steps (–100 to +500 pA; increment: 50 pA or 25 pA; 500 ms). Regular-spiking pyramidal neurons were identified by moderate maximal spiking frequencies (20–60 Hz, i.e., 10–30 spikes per 500 ms), increasing inter-spike intervals during the depolarizing step, high action potential amplitude, and large half-width. After the mean firing frequency evoked by current injections had reached a steady state for at least 5 min (typically 20–30 min after formation of the whole-cell configuration), 100 μM Cr was bath-applied for 6 min. Typically, Cr was applied a second time following washout to reconfirm the effects.

Synaptosomes were isolated by Ficoll/sucrose density-gradient centrifugation. Whole brains from adult male mice were homogenized with 15 strokes at 900 rpm in buffer A (320 mM sucrose, 1 mM EDTA, 1 mM EGTA, 10 mM Tris–HCl, pH 7.4, with a complete protease inhibitor cocktail; Roche). The homogenate (H fraction) was centrifuged at 1000 × g for 10 min to precipitate membrane fragments and nuclei (P1 fraction). The supernatant was centrifuged again at 1000 × g for 10 min, and the resulting supernatant (S1) was centrifuged at 12,000 × g for 20 min. The supernatant was the S2 fraction; the pellet was resuspended in buffer A and centrifuged at 12,000 × g for 20 min. The resulting pellet was the crude synaptosome fraction (P2), containing synaptosomes together with mitochondria and microsomes. Crude synaptosomes (P2 fraction) were resuspended in 150–200 μl buffer B (320 mM sucrose and 10 mM Tris–HCl [pH 7.4]).
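The spike counting and regular-spiking criteria described above (10–30 spikes per 500 ms step with spike-frequency adaptation) can be sketched in Python. This is a minimal illustration assuming numpy; the 0 mV detection threshold and the synthetic trace are hypothetical, not the study's analysis code.

```python
import numpy as np

def spike_times(trace_mV, dt_ms, threshold_mV=0.0):
    """Times (ms) of upward threshold crossings in a current-clamp trace."""
    above = trace_mV >= threshold_mV
    crossings = np.flatnonzero(~above[:-1] & above[1:]) + 1
    return crossings * dt_ms

def is_regular_spiking(spikes_ms):
    """Crude classifier for a 500 ms step: 10-30 spikes (20-60 Hz) with
    increasing inter-spike intervals (spike-frequency adaptation)."""
    n = len(spikes_ms)
    isis = np.diff(spikes_ms)
    adapting = n < 2 or isis[-1] > isis[0]
    return 10 <= n <= 30 and adapting

# Synthetic trace sampled at 10 kHz (dt = 0.1 ms) with two brief spikes
trace = np.full(100, -70.0)
trace[10] = trace[50] = 20.0
times = spike_times(trace, dt_ms=0.1)  # crossings at 1.0 ms and 5.0 ms
```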
The sample was carefully overlaid on top of a gradient of 2 ml of 7.5% (wt/vol in buffer B) Ficoll and 1.8 ml of 13% (wt/vol in buffer B) Ficoll and centrifuged at 98,000 × g for 45 min at 2–4°C in a swinging-bucket rotor. A myelin band was present near the surface, and the synaptosome band (fraction Sy) was present at the interface between the 13% and 7.5% Ficoll layers, with the mitochondria pelleted at the bottom. For further western analysis, the supernatant was divided into six fractions (600 μl each) and the mitochondrial pellet was discarded. The isolated synaptosomes were contained in fraction 4. For western analysis, fractions H, S1, P1, S2, P2, and Sy were adjusted to 0.5 mg/ml by the bicinchoninic acid (BCA) method with reference to a NanoDrop 2000 spectrophotometer, and 3.35 μg protein was loaded per lane. Fractions 1–6 were loaded at the same volume per lane (10 μl, composed of 6.7 μl sample and 3.3 μl loading buffer).

To remove Ficoll, we diluted the synaptosomal band (480 μl) with 4.3 ml of a pH 7.4 buffer C containing (in mM) 240 mannitol, 10 glucose, 4.8 potassium gluconate, 2.2 calcium gluconate, 1.2 MgSO4, 1.2 KH2PO4, and 25 HEPES-Tris. The sample was then centrifuged at 12,000 × g, and the pellet was resuspended in buffer C. Uptake experiments were performed either at 37°C or at 0°C (control). For each sample, 25–43 μg of synaptosomes (in a volume of 40–50 μl) were added to 360 μl buffer containing (in mmol/l) 100 NaCl, 40 mannitol, 10 glucose, 4.8 potassium gluconate, 2.2 calcium gluconate, 1.2 MgSO4, 1.2 KH2PO4, 25 HEPES, and 25 Tris (pH adjusted to 7.4). A mixture of 18 μM [14C]-creatine (0.4 μCi) and 5 μM creatine was quickly added. After 10 min, uptake was terminated by the addition of 1 ml of NaCl-free ice-cold buffer C. Samples were immediately filtered, under vacuum, through a Whatman GF/C glass filter (1825-025) pre-wetted with buffer C.
Filters were further washed with 10 ml of ice-cold buffer C and dissolved in scintillation fluid, and the radioactivity was determined by liquid scintillation spectrometry. The uptake of 13C-creatine was assayed according to a conventional procedure with slight modifications: SVs immunoisolated with 10 μg Syp antibody (101011, SySy) were resuspended in uptake buffer (150 mM meglumine-tartrate, 4 mM KCl, 4 mM MgSO4, 10 mM HEPES-KOH [pH 7.4], and cOmplete EDTA-free protease inhibitor cocktail) containing 4 mM Mg-ATP or an additional 4 mM MgSO4, followed by preincubation for 30 min at 25°C. The uptake reaction was started by addition of 1 mM 13C-creatine dissolved in uptake buffer, at a final volume of 125 μl (pH 6.8). After 10 min at 25°C, 1 ml of ice-cold uptake buffer was added to stop the reaction, followed by five additional washes. The SV contents were extracted using the protocol described above for the determination of vesicular contents, with 100 nM Cr used as the internal standard. CE-MS and LC-MS were used to verify and quantify the creatine contents of the samples.

A Vanquish UHPLC system coupled to a Q Exactive HF-X mass spectrometer (both instruments from Thermo Fisher Scientific) was used for LC-MS analysis, along with a SeQuant ZIC-HILIC column (150 mm × 2.1 mm, 3.5 μm; Merck Millipore, 150442) in positive mode and a SeQuant ZIC-pHILIC column (150 mm × 2.1 mm, 5 μm; Merck Millipore, 150460) in negative mode. For the ZIC-HILIC column, mobile phase A was 0.1% formic acid in water and mobile phase B was 0.1% formic acid in acetonitrile. The linear gradient was as follows: 0 min, 80% B; 6 min, 50% B; 13 min, 50% B; 14 min, 20% B; 18 min, 20% B; 18.5 min, 80% B; and 30 min, 80% B. The flow rate was 300 μl/min and the column temperature was maintained at 30°C.
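Converting filter counts to moles of creatine taken up requires the specific activity of the [14C]-creatine/creatine mixture in the assay. A hedged Python sketch follows; it assumes an approximately 0.4 ml assay volume (40–50 μl synaptosomes plus 360 μl buffer), the standard conversion of 2.22 × 10^6 dpm per μCi, and hypothetical count values — none of these calculations are stated explicitly in the source.

```python
DPM_PER_UCI = 2.22e6  # disintegrations per minute per μCi (standard conversion)

def creatine_uptake_nmol(sample_dpm, blank_dpm, assay_volume_ml=0.4,
                         labeled_uM=18.0, cold_uM=5.0, label_uCi=0.4):
    """Convert filter counts (dpm) to nmol of creatine taken up, using the
    specific activity of the [14C]-creatine / unlabeled creatine mixture."""
    total_nmol = (labeled_uM + cold_uM) * assay_volume_ml   # μM × mL = nmol
    specific_activity = label_uCi * DPM_PER_UCI / total_nmol  # dpm per nmol
    return (sample_dpm - blank_dpm) / specific_activity

# With the defaults, specific activity ≈ 0.4 × 2.22e6 / 9.2 ≈ 9.65e4 dpm/nmol
uptake = creatine_uptake_nmol(sample_dpm=96521.7, blank_dpm=0.0)  # ≈ 1 nmol
```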
For the ZIC-pHILIC column, mobile phase A was 20 mM ammonium carbonate in water, adjusted to pH 9.0 with 0.1% ammonium hydroxide solution (25%), and mobile phase B was 100% acetonitrile. The linear gradient was as follows: 0 min, 80% B; 2 min, 80% B; 19 min, 20% B; 20 min, 80% B; and 30 min, 80% B. The flow rate was 150 μl/min, and the column temperature was 25°C. Samples were maintained at 4°C in the Vanquish autosampler. Then, 3 µl of extracted metabolites was injected for each run. IP samples were subjected to the ZIC-HILIC column in positive mode for detection of major metabolites, and then to the ZIC-pHILIC column in negative mode for orthogonal detection.
Mitigation Strategies against Antibody Aggregation Induced by Oleic Acid in Liquid Formulations
Above their solubility limits, FFAs undergo phase separation, forming particles or droplets at ambient conditions according to their melting points. Although it has been suggested that the presence of FFAs could destabilize proteins and promote the formation of proteinaceous particles, the mechanisms underlying this process remain unclear. This study focuses on OA, the primary FFA resulting from PS80 hydrolysis, which forms liquid droplets above its melting temperature of 13–14 °C. While the pKa value of most short-chain carboxylic acids in water is approximately 4.8, significantly higher apparent pKa values, ranging from 5.7 to 9.8, have been reported for OA. This effect is attributed to the self-association of fatty acid molecules into complexes with high negative surface charge densities, which exhibit a complex, pH-dependent phase behavior. In the pH range of 4.8–8.0 reported for commercially available antibody formulations, OA polar head groups are therefore in equilibrium between the protonated and deprotonated forms. These considerations are relevant because the polarity of different oil–water interfaces has previously been shown to affect the interfacial adsorption behavior of globular proteins and surfactants. While the interaction between therapeutic proteins and fluid hydrophobic interfaces, such as silicone oil (SO)–water and air–water interfaces, has been extensively investigated, considerably less attention has been paid to interactions between therapeutic proteins and FFAs. Recent approaches involved spiking formulations with OA or hydrolyzed PS80 before performing quiescent incubation or agitation studies in the presence of air–water interfaces. These methodologies, however, can pose challenges for identifying and characterizing destabilizing mechanisms because of the lag time between the application of stress and subsequent analysis.
In this work, we apply a recently developed microfluidic platform to investigate the interactions of therapeutic antibodies at concentrations up to 180 mg mL–1 with the liquid OA–water interface on short time scales. We further explore the effects of varying levels of intact PS80, sodium chloride, guanidinium hydrochloride, l-histidine (l-His), l-lysine (l-Lys), and l-arginine (l-Arg) at different pH values and consider two distinct mAbs of biopharmaceutical origin. In analogy with recent findings obtained at the SO–water interface, we show that antibodies rapidly adsorb to the polar liquid OA interface, forming a viscoelastic protein layer that can lead to particle formation upon mechanical rupture. We find that the propensity to form this viscoelastic protein layer varies between antibodies. We further demonstrate that the effectiveness of intact PS80 in preventing antibody adsorption is influenced by surfactant and antibody concentrations as well as formulation pH. Importantly, we find that the amino acid l-Arg effectively prevents interfacial layer and particle formation even at high antibody concentrations.

Materials and Methods

2.1 Materials

Recombinant humanized monoclonal antibodies mAb1 and mAb2 (Janssen, Schaffhausen, Switzerland) were provided in stock formulations containing >150 mg mL–1 protein. Unless stated otherwise, samples were prepared in buffer containing 6% trehalose dihydrate (abcr GmbH, Karlsruhe, Germany), 44 mM sodium phosphate dibasic (Sigma-Aldrich, reag. Ph. Eur., St. Louis, MO, USA), and 10 mM citric acid (Sigma-Aldrich, reag. Ph. Eur., St. Louis, MO, USA) at pH 6.4 or 5.0. Alternatively, they were prepared in buffer containing 6% trehalose dihydrate and 25 or 130 mM l-His (Sigma-Aldrich, reag. Ph. Eur., St. Louis, MO, USA) at pH 6.4.
Buffer exchanges were performed by diluting stock formulations in the target buffer and dialyzing in either 500 μL membrane centrifugal concentrators (MWCO 50 kDa, Vivaspin 500, Sartorius, UK), dialysis cassettes (Slide-A-Lyzer, MWCO 7 kDa, Thermo Scientific, IL, USA), or 10 mL Float-A-Lyzer devices (MWCO 100 kDa, Sigma-Aldrich, USA) according to the manufacturer's instructions in 1 L of target buffer, with at least one intermediate buffer change, before measuring the pH of the dialyzed solutions. Buffers were prepared using ultrapure water (Milli-Q Synergy Water Purification System, Merck Millipore, MA, USA) and filtered using a Nalgene vacuum filtration system (Thermo Fisher Scientific, USA) and 0.45 μm Durapore PVDF filter membranes (Merck, Germany). Super refined PS80 was obtained from Croda Inc. (Edison, New Jersey, USA). PS80 stock solutions were prepared at 1% (w/v) in buffer at pH 6.4, stored at −20 °C as 1 mL aliquots, thawed on the day of use, and further diluted to prepare the formulations. Surfactant concentrations are reported hereafter as % (w/v). Oleic acid (97%, Acros Organics, USA) was stored at 5 °C and thawed at room temperature on the day of use. mAb samples were filtered using 0.2 μm cutoff syringe filters (Millex Syringe Driven Filter Unit, Japan). The pH of buffers (6% trehalose dihydrate, 44 mM sodium phosphate dibasic, and 10 mM citric acid, pH 6.4) containing l-Arg, l-His, l-Lys, and guanidinium hydrochloride (GdnHCl) (BioUltra, Sigma-Aldrich, St. Louis, MO, USA) was adjusted using aqueous HCl. 8-Anilino-1-naphthalenesulfonic acid (ANS) was purchased from TCI Europe, Belgium. Protein samples were supplemented with 25 μM ANS from a 500 μM aqueous stock solution, prepared on the day of the experiment from 10 mM ANS in DMSO (ACROS Organics, 99.7% extra dry over molecular sieves, Thermo Scientific, Waltham, MA, USA), and used within a day.
2.2 Antibody Labeling

mAb1 and mAb2 were labeled at pH 6.4 with an Alexa Fluor 647 (AF647) N-hydroxysuccinimidyl (NHS) ester (Conjugation Kit, Lightning-Link, abcam, Cambridge, UK) following the supplier's specifications. Briefly, 100 μL of antibody at 1 mg mL–1 supplemented with 10 μL of modifier reagent was incubated with lyophilized dye for 1 h at ambient temperature in the absence of light before the addition of 10 μL of quencher reagent. The conjugates were stored at 5 °C and used without further purification. mAb formulations at 30 mg mL–1 were supplemented with labeled mAb1-Alexa Fluor 647 to achieve a 1:700 molar ratio of labeled to unlabeled antibody.

2.3 Fabrication and Operation of Microfluidic Chips

Master molds were produced in-house by spin-coating SU-8 (MicroChemicals, Ulm, Germany) onto a silicon wafer, followed by soft baking. Subsequently, the silicon wafer was exposed to ultraviolet light through a mask incorporating the chip layout (designed in AutoCAD 2021) to induce local polymerization prior to postbaking. Standard soft lithography was performed to replicate the chip geometry by pouring a 10:1 mixture of polydimethylsiloxane (PDMS) and curing agent (Sylgard 184, Dow Corning, Midland, MI) onto the mold, followed by degassing (1 h) and baking (2 h at 65 °C). The microfluidic chip was bonded to glass slides (Menzel, Braunschweig, Germany) after plasma activation (ZEPTO plasma cleaner, Diener Electronics, Ebhausen, Germany). Microfluidic chips were used within 2 h of bonding. The chip had two inlets, one outlet, and a flow-focusing nozzle. The height of the channel was 50 μm, and the width of the nozzle and the first section of the channel was 100 μm. The second section comprised linear channels with a width of 70 μm and 58 expansion regions measuring 200 μm in width and length.
The fluid flow inside the channels was modulated using an external syringe pump (Cetoni neMESYS, Cetoni GmbH, Korbussen, Germany) that controlled the plunger of a 500 μL unsiliconized glass syringe (Hamilton, Reno, NV, USA). The connection to the corresponding inlet of the microfluidic chip was made via PTFE tubing (Adtech Polymer Engineering Ltd., Stroud, UK). The protein formulation and oleic acid flow rates were maintained at 1.2 and 0.125 μL min–1, respectively; these flow rates correspond to a droplet residence time of 35 s within the microfluidic chip. Image acquisition was started after stable droplet formation had been established for 15 min, and droplets were collected using a 200 μL gel-loading pipet tip inserted into the chip outlet. Microfluidic chips were designed for single use, and operation required a minimum sample volume of 50 μL.

2.4 On-Chip Quantification of Droplet Deformation

The shape and dimensions of the droplets were extracted from microscopy images captured simultaneously at different expansion regions at 4× magnification during chip operation. A minimum of 200 images were acquired per experimental condition and expansion region. The characterization of droplet shapes within the expansion regions involved image binarization using a custom MATLAB algorithm, which yielded the width (w) and height (h) of the droplets. The dimensionless droplet deformation parameter D was then computed as

D = (w − h)/(w + h) (1)

so that D is equal to 0 for perfect circles. The mean and standard deviation of D were determined for a minimum of five droplets per region and condition. Additional details are provided in the Supporting Information (Figure S1).
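The width/height extraction and deformation calculation described above can be sketched in Python (the original analysis used a custom MATLAB algorithm). This numpy sketch assumes the droplet has already been binarized into a foreground mask and that D is defined as (w − h)/(w + h), which vanishes for a circle; the synthetic mask is illustrative only.

```python
import numpy as np

def deformation(mask):
    """Deformation parameter D = (w - h) / (w + h) of a binarized droplet,
    taken from the bounding extent of its foreground pixels (D = 0 for a circle)."""
    rows = np.flatnonzero(mask.any(axis=1))  # rows containing droplet pixels
    cols = np.flatnonzero(mask.any(axis=0))  # columns containing droplet pixels
    h = rows[-1] - rows[0] + 1
    w = cols[-1] - cols[0] + 1
    return (w - h) / (w + h)

# A 3-pixel-high, 7-pixel-wide synthetic "droplet": D = (7 - 3) / (7 + 3) = 0.4
mask = np.zeros((9, 11), dtype=bool)
mask[3:6, 2:9] = True
```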
2.5 Acquisition of Microscopy Images

A Ti2–U inverted microscope (Nikon, Switzerland) equipped with an LED light source (Omicron Laserage Laserprodukte GmbH, Germany), a camera (Zyla sCMOS 4.2P-CL10, Andor, UK), and Nikon CFI Plan Fluor objectives (4×, 10×, and 20× magnification) was used to acquire images. Acquisition was started after 15 min of stable droplet formation.

2.6 Off-Chip Characterization of Samples

Brightfield microscopy images were captured by pipetting approximately 10 μL of the sample, collected at the chip outlet using gel-loading tips, onto a glass slide, followed by image acquisition. The extrinsic dye ANS was detected by excitation at 365 nm and collection of the emission between 417 and 477 nm. ANS fluorescence images were acquired at 20× magnification, 200 ms integration time, and 770 mW excitation laser power. Alexa Fluor 647 fluorescence images were acquired at 20× magnification, 200 ms (off-chip) or 2 ms (on-chip) integration time, and 260 mW (off-chip) or 425 mW (on-chip) excitation laser power (excitation wavelength: 617 nm; integrated emission wavelengths: 640–690 nm).

2.7 Measurement of Diffusion Interaction Parameter kD

The stock solutions of mAb1 and mAb2 at pH 6.4 were diluted to a range of concentrations (4, 7, 10, 13, 17, and 20 mg mL–1) and filtered (0.02 μm, Whatman Anotop sterile inorganic membrane filter, Cytiva, Germany), and triplicate measurements of the mutual diffusion coefficient Dm were performed by dynamic light scattering (DLS) at 450 nm and 25 °C using a Prometheus Panta (Nanotemper Technologies, Munich, Germany). Samples were prepared by filling standard capillaries with 10 μL of solution. The relationship of the diffusion interaction parameter kD to the mutual diffusion coefficients Dm measured at different antibody concentrations c and the self-diffusion coefficient D0 is given by the following equation.
Dm = D0(1 + kD·c) (2)

The values for kD were determined by linear regression of the Dm vs c data to eq 2, and their standard deviations were calculated by propagating the standard errors of the fitted coefficients. Data and linear fits are shown in Figure S2.

2.8 Protein Thermal Stability Characterization

The thermal stability of mAb1 and mAb2 at pH 6.4 was determined using a Prometheus Panta (Nanotemper Technologies, Munich, Germany) and standard capillaries filled with 10 μL of antibody solution at 7 mg mL–1. The intrinsic fluorescence at 330 and 350 nm upon excitation at 280 nm was measured while the sample was heated from 20 to 90 °C at a rate of 1 °C min–1. In parallel, the hydrodynamic radius of the sample was monitored using DLS. Data were analyzed using Nanotemper's analysis software by applying a 2-state fit to the fluorescence ratio at 350 and 330 nm. The unfolding curves of the two mAbs are shown in Figures S3 and S4.
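Since Dm = D0(1 + kD·c), kD is the slope-to-intercept ratio of a linear fit of Dm against c. A minimal Python sketch of this fit follows (assuming numpy; the first-order error propagation neglects the slope–intercept covariance, which is our simplification, and the example data are synthetic).

```python
import numpy as np

def fit_kD(concs_mg_ml, Dm):
    """Fit Dm = D0 * (1 + kD * c); return kD (mL/mg) with a standard
    deviation propagated from the fitted slope and intercept."""
    (slope, D0), cov = np.polyfit(concs_mg_ml, Dm, 1, cov=True)
    kD = slope / D0
    # First-order propagation of the coefficient variances (covariance neglected)
    sd = abs(kD) * np.sqrt(cov[0, 0] / slope**2 + cov[1, 1] / D0**2)
    return kD, sd

# Synthetic data: D0 = 4.0 (arbitrary units), kD = -0.01 mL/mg, no noise
c = np.array([4.0, 7.0, 10.0, 13.0, 17.0, 20.0])  # mg/mL, as in the text
kD, sd = fit_kD(c, 4.0 * (1.0 - 0.01 * c))
```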
Buffer exchanges were performed by diluting stock formulations in the target buffer and performing dialysis in either 500 μL membrane centrifugal concentrators (MWCO 50 kDa, Vivaspin 500, Sartorius, UK), dialysis cassettes (Slide-A-Lyzer, MWCO 7 kDa, Thermo Scientific, IL, USA), or 10 mL Float-A-Lyzer devices (MWCO 100 kDa, Sigma-Aldrich, USA) according to the manufacturer’s instructions in 1 L of target buffer, with at least one intermediate buffer change, before measuring the pH values of the dialyzed solutions. The buffers were prepared using ultrapure water (Milli-Q Synergy Water Purification System, Merck Millipore, MA, USA) and filtered using a Nalgene vacuum filtration system (ThermoFisher Scientific, USA) and 0.45 μm Durapore PVDF filter membranes (Merck, Germany). Super refined PS80 was obtained from Croda Inc. (Edison, New Jersey, USA). PS80 stock solutions were prepared in buffer at pH 6.4 at 1% (w/v) and stored at −20 °C as 1 mL aliquots, thawed on the day of use, and further diluted to prepare the formulations. Surfactant concentrations are reported hereafter as % (w/v). Oleic acid (97%, Acros Organics, USA) was stored at 5 °C and thawed at room temperature on the day of use. mAb samples were filtered using 0.2 μm cutoff syringe filters (Millex Syringe Driven Filter Unit, Japan). The pH of buffers (6% trehalose dihydrate, 44 mM sodium phosphate dibasic, and 10 mM citric acid, pH 6.4) containing l-Arg, l-His, l-Lys, and guanidinium hydrochloride (GdnHCl) (BioUltra, Sigma-Aldrich, St. Louis, MO, USA) was adjusted using aqueous HCl. 8-Anilino-1-naphthalenesulfonic acid (ANS) was purchased from TCI Europe, Belgium. Protein samples were supplemented with ANS at 25 μM from a 500 μM aqueous stock solution, prepared from 10 mM ANS in DMSO (ACROS Organics, 99.7% extra dry over molecular sieves, Thermo Scientific, Waltham, MA, USA) on the day of the experiment, and used within a day.
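The ANS working solution described above is obtained by two serial dilutions (10 mM DMSO stock, then a 500 μM aqueous stock, then 25 μM in the protein sample). A minimal sketch of the underlying C1·V1 = C2·V2 arithmetic; the final volumes chosen here are illustrative, not taken from the protocol:

```python
def stock_volume(c_stock, c_final, v_final):
    """Volume of stock required so that c_stock * v = c_final * v_final."""
    return c_final * v_final / c_stock

# 10 mM (10,000 uM) DMSO stock -> 1000 uL of 500 uM aqueous stock (1:20 dilution)
v1 = stock_volume(10_000, 500, 1000)  # concentrations in uM, volumes in uL
# 500 uM aqueous stock -> 200 uL of sample at 25 uM ANS (another 1:20 dilution)
v2 = stock_volume(500, 25, 200)
print(v1, v2)  # 50.0 uL and 10.0 uL of the respective stocks
```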
2.2 Antibody Labeling

mAb1 and mAb2 were labeled at pH 6.4 with an Alexa Fluor 647 (AF647) N-hydroxysuccinimidyl (NHS) ester (Conjugation Kit, Lightning-Link, abcam, Cambridge, UK) following the supplier’s specifications. Briefly, 100 μL of antibody at 1 mg mL –1 supplemented with 10 μL of modifier reagent was incubated with lyophilized dye for 1 h at ambient temperature in the absence of light before the addition of 10 μL of quencher reagent. The conjugates were stored at 5 °C and used without further purification. mAb formulations at 30 mg mL –1 were supplemented with labeled mAb1-Alexa Fluor 647 to achieve a 1:700 molar ratio of labeled to unlabeled antibody.

2.3 Fabrication and Operation of Microfluidic Chips

Master molds were produced in-house by spin-coating SU-8 (MicroChemicals, Ulm, Germany) onto a silicon wafer, followed by soft baking. Subsequently, the silicon wafer was exposed to ultraviolet light through a mask incorporating the chip layout (designed in AutoCAD 2021) to induce local polymerization prior to postbaking. Standard soft lithography was performed to replicate the chip geometry by pouring a 10:1 mixture of polydimethylsiloxane (PDMS) and curing agent (Sylgard 184, Dow Corning, Midland, MI) onto the mold, followed by degassing (1 h) and baking (2 h at 65 °C). The microfluidic chip was bonded to glass slides (Menzel, Braunschweig, Germany) after plasma activation (ZEPTO plasma cleaner, Diener Electronics, Ebhausen, Germany). The microfluidic chips were used within 2 h after bonding. The chip had two inlets, one outlet, and a flow-focusing nozzle. The height of the channel was 50 μm, and the width of the nozzle and the first section of the channel was 100 μm. The second section comprised linear channels with a width of 70 μm and 58 expansion regions measuring 200 μm in width and length.
The fluid flow inside the channels was modulated using an external syringe pump (Cetoni neMESYS, Cetoni GmbH, Korbussen, Germany) that controlled the movement of the plunger of a 500 μL unsiliconized glass syringe (Hamilton, Reno, NV, USA). The connection to the corresponding inlet of the microfluidic chip was achieved via PTFE tubing (Adtech Polymer Engineering Ltd., Stroud, UK). The protein formulation and the oleic acid flow rates were maintained at 1.2 and 0.125 μL min –1 , respectively. These flow rates correspond to a droplet residence time of 35 s within the microfluidic chip. Image acquisition was started after stable drop formation was established for 15 min, and droplets were collected using a 200 μL gel-loading pipet tip inserted into the chip outlet. Microfluidic chips were designed for single use, and the operation required a minimum sample volume of 50 μL.

2.4 On-Chip Quantification of Droplet Deformation

The shape and dimensions of the droplets were extracted from microscopy images captured simultaneously at different expansion regions at a 4× magnification during the chip operation. A minimum of 200 images were acquired per experimental condition and expansion region. The characterization of droplet shapes within the expansion regions involved postimage binarization using a custom MATLAB algorithm, which yielded the width (w) and height (h) of the droplets. The dimensionless droplet deformation parameter D was then computed as

D = (w – h)/(w + h) (eq 1)

D is equal to 0 for perfect circles. The mean and standard deviation of D were determined for a minimum of five droplets per region and condition. Additional details are provided in the Supporting Information ( Figure S1 ).
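A minimal Python sketch of this image analysis step (the original used a custom MATLAB algorithm; the synthetic mask below stands in for a binarized droplet image):

```python
import numpy as np

def deformation(mask):
    """Deformation parameter D = (w - h) / (w + h) from a binary droplet mask."""
    rows = np.any(mask, axis=1)  # rows containing droplet pixels
    cols = np.any(mask, axis=0)  # columns containing droplet pixels
    h = rows.sum()               # droplet height (vertical extent, pixels)
    w = cols.sum()               # droplet width (horizontal extent, pixels)
    return (w - h) / (w + h)

# Synthetic test: an ellipse with semi-axes 30 px (x) and 20 px (y)
yy, xx = np.mgrid[-40:41, -40:41]
ellipse = (xx / 30.0) ** 2 + (yy / 20.0) ** 2 <= 1.0
print(round(deformation(ellipse), 2))  # 0.2 for this 3:2 ellipse
```

A perfectly circular mask gives D = 0, matching the definition in eq 1.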
2.5 Acquisition of Microscopy Images

A Ti2–U inverted microscope (Nikon, Switzerland) equipped with an LED light source (Omicron Laserage Laserprodukte GmbH, Germany), a camera (Zyla sCMOS 4.2P-CL10, Andor, UK), and Nikon CFI Plan Fluor Objectives (4×, 10×, and 20× magnification) was used to acquire images. The acquisition was started after 15 min of stable droplet formation.

2.6 Off-Chip Characterization of Samples

Brightfield microscopy images of samples were captured by pipetting approximately 10 μL of the sample, which was collected at the chip outlet using gel-loading tips, onto a glass slide, followed by image acquisition. The extrinsic dye ANS was detected by sample excitation at 365 nm and selecting the emission between 417 and 477 nm. ANS fluorescence images were acquired at 20× magnification, 200 ms integration time, and 770 mW excitation laser power. Alexa Fluor 647 fluorescence images were acquired at 20× magnification, 200 ms (off-chip) or 2 ms (on-chip) integration time, and 260 mW (off-chip) or 425 mW (on-chip) excitation laser power (excitation wavelength: 617 nm, integrated emission wavelengths: 640–690 nm).

2.7 Measurement of Diffusion Interaction Parameter k D

The stock solutions of mAb1 and mAb2 at pH 6.4 were diluted to a range of concentrations (4, 7, 10, 13, 17, and 20 mg mL –1 ), filtered (0.02 μm, Whatman Anotop, sterile, inorganic membrane filter, Cytiva, Germany), and triplicate measurements of mutual diffusion coefficients D m were performed by dynamic light scattering (DLS) at 450 nm and 25 °C using a Prometheus Panta (Nanotemper Technologies, Munich, Germany). Samples were prepared by filling standard capillaries with 10 μL of the solution. The relationship of the diffusion interaction parameter k D with the mutual diffusion coefficients D m measured at different antibody concentrations c and the self-diffusion coefficient D 0 is given by the following equation:

D m = D 0 (1 + k D c) (eq 2)

The values for k D were determined by linear regression of the data of D m vs c to eq 2, and their standard deviations were calculated by propagating the standard errors of the fitted coefficients. Data and linear fit are shown in Figure S2 .

2.8 Protein Thermal Stability Characterization

The thermal stability of mAb1 and mAb2 at pH 6.4 was determined using a Prometheus Panta (Nanotemper Technologies, Munich, Germany) and standard capillaries filled with 10 μL of an antibody solution at 7 mg mL –1 . The intrinsic fluorescence at 330 and 350 nm upon excitation at 280 nm was measured while the sample was heated from 20 to 90 °C at a rate of 1 °C min –1 . In parallel, the hydrodynamic radius of the sample was monitored using DLS. Data were analyzed using Nanotemper’s analysis software by applying a 2-state fit to the fluorescence ratio at 350 and 330 nm. The unfolding curves of the two mAbs are shown in Figures S3 and S4 .

Results and Discussion

3.1 Polarity and pH-Dependent Phase Behavior of Oleic Acid

a shows the molecular structures of OA and other hydrophobic liquids with a polarity ranking according to the interfacial liquid tension (IFT). This interfacial property depends on molecular characteristics including dipole moment and ionization potential and has previously been shown to predict the adsorption behavior of globular proteins and surfactants, as well as the strength of adsorbed viscoelastic protein layers at liquid–liquid interfaces. b schematically illustrates the phase behavior of OA above its solubility limit (reported values range between 2 and 5 μg mL –1 in pharmaceutical buffers) in the pH range characteristic for antibody formulations (4.8–8.0) and above its melting temperature of 13–14 °C. Prolonged exposure of mAb products to temperatures above the OA melting point can occur in scenarios such as temperature stability and photostability studies, or during analytical testing and handling of drug products, such as administration.
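The ionization state underlying this phase behavior can be estimated with the Henderson–Hasselbalch relation; a minimal sketch, assuming an apparent p K a of ∼7 for OA (the value discussed below for long-chain fatty acids under pharmaceutically relevant conditions):

```python
def ionized_fraction(pH, pKa=7.0):
    """Henderson-Hasselbalch: fraction of the acid in the deprotonated (ionized) form."""
    return 1.0 / (1.0 + 10.0 ** (pKa - pH))

for pH in (5.0, 6.4, 8.0):
    print(f"pH {pH}: {ionized_fraction(pH):.0%} ionized")
# approx. 1% at pH 5.0, 20% at pH 6.4, 91% at pH 8.0
```

At pH 6.4 this reproduces the ∼20% degree of dissociation quoted in Section 3.4, while at pH 5 OA is essentially fully protonated.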
Oil droplets and lamellar assemblies were observed in aqueous environments at pH < 8. The apparent p K a value of palmitic acid (with a chain length of 16), an FFA related to the degradation of PS20, was determined to be 7 in formulations containing both antibodies and polysorbates. This finding provides a lower bound of ∼7 for the apparent p K a value of OA under pharmaceutically relevant solution conditions, considering that the value tends to increase with the length of the carbon chain. At pH values ≤5, significantly lower than the apparent p K a (∼7), OA can thus be expected to be fully protonated, whereas at higher pH values, OA is expected to be partially or fully ionized ( Figure S5 ). Deprotonation causes an increase in the interfacial net charge and attractive ion–dipole interactions between OA carboxyl groups, which results in a decrease in the intermolecular distances between OA molecules at the interface and potentially modulates the interaction strength between the OA interface and adsorbing protein and surfactant molecules.

3.2 Antibodies Rapidly Adsorb and Form a Viscoelastic Layer on Oleic Acid Droplets

We have recently developed a microfluidic droplet device capable of simultaneously probing the adsorption, viscoelastic layer formation, and aggregation of proteins at the SO–water interface on short time scales. A schematic representation of the microfluidic chip is shown in a. Monodisperse micrometer-sized OA droplets are formed in protein formulations at pH 6.4 and 5. OA droplets travel as vertically squeezed plugs inside the channel, wherein they experience a total of 58 expansion and compression cycles. These regions allow for the repeated shape relaxation of the protein-loaded droplet, while at the same time enabling the detection of the rheological response of the droplet interface to the flow field. Brightfield microscopy images are acquired at multiple positions corresponding to droplet residence times in the range of 0.3 to 35 s.
The samples can be further analyzed off-chip by collecting droplets at the outlet of the microfluidic chip. We first generated OA droplets in the presence of 30 mg mL –1 mAb1 at both pH 5 and 6.4, which remained colloidally stable when collected at the chip outlet ( b). In contrast, droplets generated in buffer readily merged into larger ones ( c). These observations demonstrate the rapid adsorption of antibodies at the interface of OA droplets, which effectively stabilizes them against coalescence. The protein adsorption results in the progressive formation of a viscoelastic protein layer, which is demonstrated by the restricted droplet relaxation into discs within expansion regions ( d and Movie S1 ). This elongational deformation was absent when OA droplets were formed in the buffer alone. The transition from fluid-like to viscoelastic behavior is likely driven by conformational rearrangements and subsequent interactions between adsorbed proteins. This phenomenon has previously been shown to occur at air–water and various oil–water interfaces, resulting in rigid protein layers that modulated the deformation of oil droplets subjected to shear flows. The formation and presence of viscoelastic antibody layers at fluid interfaces are critical in the context of biopharmaceuticals, as they precede the formation of proteinaceous particles upon mechanical rupture. Indeed, upon mechanical perturbation by centrifugation (5000 × g , 5 min) of OA droplets formed in the presence of mAb1, we observed the presence of wrinkles that formed via out-of-plane deformations (buckling) of the OA interface ( e). Moreover, particles were released that could be stained by the extrinsic dye ANS, which reports on protein unfolding and aggregation. Increasing the extent of mechanical perturbation by centrifugation led to a drastic increase in the formation of particles ( Figure S6 ).
3.3 Effect of Oil Polarity and Protein Physicochemical Properties on Viscoelastic Layer Formation

The adsorption and formation of viscoelastic layers around liquid interfaces are influenced by the physicochemical properties of the protein. For instance, the globular protein beta-lactoglobulin (BLG) has been shown to form stronger viscoelastic layers than bovine serum albumin (BSA) at oil interfaces of different polarity. These differences have been attributed to the lower thermodynamic stability, smaller size, and more negative net charge of BLG compared to BSA. Differences have also been observed within the same class of therapeutic proteins (IgG1), showing stronger interfacial layers for the antibody with a higher propensity to form aggregates. We therefore compared the formation of viscoelastic layers of two mAbs of pharmaceutical origin, mAb1 and mAb2, around oils of different polarities, namely OA and SO, the latter representing the most common liquid–liquid interface found in the context of biotherapeutic drug delivery. The interfacial stability of both mAbs has previously been evaluated and showed superior stability of mAb2 over mAb1 at all interfaces tested, including air–water and solid–water interfaces with varying levels of negative charge and hydrophobicity. The two IgG1s exhibited significant differences in zeta potentials and dipole moments and were similar in terms of computed hydrophobic patch areas, measured melting temperatures T m , and diffusion interaction parameters k D , the latter of which is considered a good predictor of problematic solution behavior such as viscosity and opalescence. The calculated and measured physicochemical properties of the two mAbs are summarized in . The isoelectric points of mAb1 and mAb2 were close to the upper and lower bounds of typical values (6.5–9.5) of approved mAbs with favorable solution behavior, whereas the negative k D values indicated a tendency for self-association at higher concentrations.
Both mAbs exhibited apparent melting temperatures higher than typical cutoff values used to flag problematic mAbs during developability assessment. We quantified the extent of deformation of OA droplets formed in protein formulations in different expansion regions corresponding to residence times ranging between 0.3 and 35 s using the dimensionless parameter D = ( w – h )/( w + h ), where w and h represent the major and minor axes of the droplet, respectively (see Methods and Figure S1 ). a–c shows the extent of deformation of SO and OA droplets in the presence of mAb1 and mAb2 at 30 mg mL –1 and pH 6.4. The data of mAb1 at the SO interface were taken from ref . The deformation of SO droplets increased with the residence time for both mAbs and reached a plateau after approximately 20–30 s, which can therefore be considered the characteristic time scale for viscoelastic layer formation under these conditions. At the SO interface, mAb2 showed significantly less deformation (−30%) compared to mAb1. Essentially, no layer formation was detected for mAb2 at the OA–water interface, while mAb1 formed a rigid layer whose strength showed a plateau after a similar time scale as with SO ( b,c). We confirmed these results by spiking the samples with Alexa Fluor 647-labeled antibodies at a 1:700 ratio of labeled to unlabeled mAb and observing their on-chip adsorption at the OA–water interface. In agreement with the deformation data, OA droplets in the presence of mAb1 showed a bright fluorescence rim, whereas no adsorption of mAb2 could be detected ( d). Moreover, droplets formed in the presence of mAb2 underwent rapid coalescence events off-chip, further demonstrating negligible or no adsorption ( Movie S2 ). Next, we assessed whether the deformation data could correlate with particle formation under conditions close to those of product formulations. 
To this aim, we applied mechanical perturbation using a stress test that simulates conditions close to drug administration with syringes. Specifically, OA droplets formed in mAb1 and mAb2 formulations were subjected to pumping cycles using pharmaceutical plastic syringes. Consistent with the droplet deformation data and the fluorescence signal measured on-chip, OA droplets formed in the mAb1 formulation shed proteinaceous particles into solution upon syringe pumping, while no particles were observed in the mAb2 formulation ( Figure S7 ). These data further agree with the previously reported superior stability of mAb2 at air–water and different solid–liquid interfaces. At pH 6.4, mAb2 has a lower net charge compared to that of mAb1, which, in combination with its significantly lower dipole moment, may reduce adsorption at the polar and negatively charged OA interface. Here, stability differences between antibodies may be more pronounced than at hydrophobic interfaces, such as air–water and SO–water, since proteins can populate more native-like conformations than the non-native structures generated at hydrophobic interfaces. Protein surface properties are therefore important in modulating interactions at polar interfaces, including electrostatic and dipole–dipole interactions, but may have smaller effects on non-native, hydrophobic interactions at nonpolar interfaces. The results illustrate that the destabilizing effect of OA is dependent on the specific mAb and that care should be taken with antibodies that exhibit high zeta potentials and dipole moments in formulations at pH values where OA is expected to bear a net negative charge. Interestingly, the interfacial stability of the two mAbs differs despite their similar bulk stability behavior assessed by k D and T m .
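The k D values referenced here come from fitting the linear relation D m = D 0 (1 + k D c) to DLS data (see Methods). A minimal sketch of that regression with synthetic data; the numbers below are illustrative, not the measured values:

```python
import numpy as np

# Synthetic DLS data following Dm = D0 * (1 + kD * c)
D0_true, kD_true = 4.5e-11, -0.010      # m^2/s and mL/mg (negative kD: self-association)
c = np.array([4, 7, 10, 13, 17, 20.0])  # mg/mL, mirroring the dilution series
Dm = D0_true * (1 + kD_true * c)

# Linear regression: slope = D0 * kD, intercept = D0
slope, intercept = np.polyfit(c, Dm, 1)
D0_fit = intercept
kD_fit = slope / intercept
print(f"D0 = {D0_fit:.2e} m^2/s, kD = {kD_fit:.4f} mL/mg")
```

With noise-free input the fit recovers the generating parameters exactly; with real triplicate data, the standard errors of slope and intercept would be propagated into the k D uncertainty, as described in the Methods.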
3.4 The Formation of a Viscoelastic Layer around OA Droplets Is Modulated by pH and PS80

We then investigated the influence of pH and intact PS80 concentration on the kinetics and extent of the protein layer formation. The deformation of droplets increased with residence time and reached an apparent plateau after approximately 30 s ( a,b) at both pH 5.0 and 6.4. The deformation plateau value of mAb1 at pH 6.4 was higher than at pH 5, indicating that adsorbed proteins formed a slightly stronger viscoelastic layer when formed at the partially ionized OA interface, which may be caused by a denser packing of proteins. At pH 5, OA is expected to be fully protonated, in contrast to pH 6.4, where a degree of dissociation of about 20% was computed using the Henderson–Hasselbalch relationship ( Figure S5 ). For the same mAb and buffer system, PS80 prevented the formation of a viscoelastic layer at the nonpolar SO interface at concentrations above its critical micelle concentration (cmc). In the presence of 0.01% PS80, a concentration 6-fold higher than its cmc (0.0017%), the formation of a protein layer at the OA interface was only partially prevented at pH 6.4 ( a), whereas complete protection was observed at pH 5.0 ( b). A PS80 concentration of 0.05% (30-fold higher than the cmc) was required to completely prevent the formation of a viscoelastic protein layer at pH 6.4. We note that in commercial antibody products, PS80 is formulated at concentrations ranging from 0.6- to 120-fold its cmc, with the majority at or below 30-fold. The higher concentration of PS80 required to compete with antibody adsorption at the OA–water interface compared to the SO–water interface can be explained by the higher polarity, which leads to weaker interactions. Additionally, oils with increased polarity can interact more strongly with water via H-bonds and polar−π interactions.
These attractive interactions result in a competition between the oil and surfactant molecules for interfacial adsorption, leading to a reduction in the maximum interfacial concentration of the surfactant. At pH 6.4, partially ionized OA bears a net negative charge and exhibits increased polarity compared to pH 5, causing strong ion–dipole interactions of the ionized and protonated carboxyl groups at the OA interface with each other ( b) and with surrounding water molecules. Thus, the affinity of the intact PS80 surfactant for the polar OA interface may be reduced, allowing antibody molecules to compete more effectively for interfacial adsorption. Consequently, mAbs form a viscoelastic layer, although with lower strength compared to a surfactant-free system, likely due to the presence of coadsorbed, intercalating PS80. We also tested intermediate PS80 concentrations between 0.01 and 0.05% at pH 6.4, observing a monotonic decrease of the plateau value with increasing PS80 concentration ( c). Interestingly, for 0.01, 0.02, and 0.03% PS80 concentrations, the droplet deformation profiles exhibited an apparent lag phase during the first 10 s ( a and S8 ). This lag phase can be explained by considering the faster adsorption of PS80 compared to the mAb. The mAb is initially excluded from the interface as a result of the kinetic competition with the surfactant. Assuming that antibody adsorption is essentially irreversible and that surfactant adsorption is reversible and characterized by a high desorption rate constant, the antibody can subsequently accumulate at the interface and form a viscoelastic layer. This mechanism is supported by a model based on Langmuir adsorption which describes the experimental data ( a, S9–10, eqs S4–S6, and Table S1 ).
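This kinetic competition can be illustrated with a toy two-species Langmuir model (a sketch of the mechanism only; the rate constants are invented, and this is not the fitted model of eqs S4–S6):

```python
# Competitive adsorption: surfactant (fast, reversible) vs antibody (slow, irreversible)
ka_s, kd_s = 5.0, 1.0    # surfactant adsorption/desorption rate constants, 1/s (illustrative)
ka_p = 0.05              # antibody adsorption rate constant, 1/s; desorption neglected
theta_s = theta_p = 0.0  # fractional surface coverages
dt = 1e-3
for _ in range(int(35 / dt)):             # 35 s residence time, forward Euler
    free = 1.0 - theta_s - theta_p        # fraction of unoccupied interface
    theta_s += dt * (ka_s * free - kd_s * theta_s)
    theta_p += dt * ka_p * free
print(f"after 35 s: surfactant {theta_s:.2f}, antibody {theta_p:.2f}")
```

The surfactant coverage builds up within a fraction of a second, while the antibody coverage grows slowly but monotonically, reproducing the initial lag and eventual layer formation; increasing the antibody adsorption rate (i.e., its concentration) depletes the surfactant coverage faster, consistent with the high-concentration results in Section 3.6.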
3.5 l-Arg Prevents Antibody Adsorption and Viscoelastic Layer Formation

In the previous section, we showed that, under certain buffer conditions and surfactant concentrations, intact PS80 alone may not be sufficient to protect antibodies at the OA–water interface. We therefore analyzed the effect of other excipients commonly present in antibody formulations. The addition of 130 mM sodium chloride (NaCl) to a surfactant-free formulation decreased the strength of the viscoelastic layer but could not suppress its formation ( a). Even a 2-fold increase in the concentration (260 mM NaCl) led only to a further marginal decrease in the deformation plateau ( Figure S11a ). These results suggest that shielding of attractive electrostatic interactions between the interface and mAb1 alone may not be sufficient to prevent protein adsorption and layer formation. We next investigated the effect of l-arginine (l-Arg), which is widely used in therapeutic antibody formulations due to its favorable effects on protein bulk properties, such as reduction of solution viscosity and aggregation. However, to the best of our knowledge, the effect of l-Arg on antibody interfacial stability at high concentrations has not been investigated. l-Arg at pH 6.4 exhibits various features, including a positive net charge that can modulate electrostatic interactions, as well as a deprotonated carboxyl group, a protonated amino group, and a guanidinium group, which allow the formation of intermolecular H-bonds and interactions with aromatic side chains. We tested the effect of l-Arg at concentrations between 13 and 130 mM, which correspond to the typical range in approved antibody products. The time-dependent deformation values in the presence of 130 mM l-Arg at pH 6.4 in the absence of PS80 are shown in a, while the plateau values at the end of the device as a function of l-Arg concentration are shown in b (see Figure S12 for full data sets).
Increasing the concentration of l-Arg led to a gradual decrease of the strength of the interfacial layer, whose formation was completely inhibited at 130 mM. Interestingly, the formation of the layer was not completely inhibited when the formulation was supplemented with either 130 mM guanidine hydrochloride (GdnHCl), l-lysine (l-Lys), or a 1:1 mixture of the two, each at 130 mM, suggesting that the combination of a positively charged amino group and a guanidinium group in the same molecule, as present in l-Arg, is key to effectively preventing protein destabilization at this interface ( Figure S11b ). Analysis of samples spiked with mAb1-Alexa Fluor 647 by fluorescence microscopy showed that the addition of 130 mM l-Arg prevented the formation of a fluorescent protein rim ( c,d). These results indicate that l-Arg prevents antibody adsorption, with possible mechanisms including the shielding of attractive electrostatic interactions, the modulation of H-bonds, and the interaction with exposed hydrophobic groups of the antibodies. Given the observed effect of l-Arg, we analyzed the impact of 25 mM l-His, a commonly used buffer with a partially protonated imidazole side chain. We observed the same results as with sodium phosphate buffer ( Figure S13 ). Moreover, an increase in the concentration of l-His to 130 mM decreased the viscoelastic layer strength to an extent similar to NaCl and l-Lys ( Figure S11 ), indicating the absence of a protective effect of l-His beyond electrostatics.

3.6 l-Arg Mitigates Protein Particle Formation at OA–Water Interfaces in High Antibody Concentration Formulations

The experiments discussed in the previous sections were performed at 30 mg mL –1 . To analyze the effect of OA under conditions that are closer to formulations for subcutaneous administration, we measured the stability of the two antibodies at 180 mg mL –1 at pH 6.4 in the absence of PS80.
For mAb1, we observed progressive buckling of the viscoelastic protein layer and the shedding of particles into solution around OA droplets when traveling through the compression and expansion zones of the chip ( a and Movie S3 ). Moreover, even after 1 h of incubation, the OA droplets collected at the end of the chip did not relax back to a spherical shape and buckles remained at their interface, indicating the irreversible formation and collapse of the protein layer into solid-like particles. In contrast, for mAb2, OA droplets exhibited much less buckling on-chip and fully relaxed to a spherical shape off-chip ( b), showing no collapse of the adsorbed protein layer. The difference between the two antibodies at a high concentration is therefore consistent with the experiments performed at a lower concentration of 30 mg mL –1 (Figures b–d and S7 ). The addition of 0.01% PS80 delayed but did not suppress the onset of buckling of the protein layer at the OA–water interface ( c). Buckling was still observed even at 0.05% PS80 at 180 mg mL –1 ( Figure S14a ) antibody concentration, in contrast to the results previously obtained at 30 mg mL –1 ( a). These observations indicate that at the higher protein concentration, the surfactant was only partially able to compete with the antibody for adsorption at the OA–water interface. This behavior was qualitatively predicted by the Langmuir model ( eqs S4–S6 and Figure S10c,d ), showing that a 6-fold increase in antibody concentration results in a rapid decay of surfactant coverage. However, consistent with the same experiment with 30 mg mL –1 mAb1, the presence of 130 mM l -Arg, in both the absence and presence of 0.01% PS80 ( d and S14b ), effectively prevented buckling and particle formation at 180 mg mL –1 mAb1 concentration. Polarity and pH-Dependent Phase Behavior of Oleic Acid a shows the molecular structures of OA and other hydrophobic liquids with a polarity ranking according to the interfacial liquid tension (IFT). 
This interfacial property depends on molecular characteristics including dipole moment and ionization potential and has previously been shown to predict the adsorption behavior of globular proteins and surfactants, as well as the strength of adsorbed viscoelastic protein layers at liquid–liquid interfaces. − b schematically illustrates the phase behavior of OA above its solubility limit (reported values range between 2 and 5 μg mL –1 in pharmaceutical buffers) in the pH range characteristic for antibody formulations (4.8–8.0) and above its melting temperature of 13–14 °C. Prolonged exposure of mAb products to temperatures above the OA melting point can occur in scenarios such as temperature stability and photostability studies or during analytical testing and handling of drug products such as administration. Oil droplets and lamellar assemblies were observed in aqueous environments at pH < 8. , The apparent p K a value of palmitic acid (with a chain length of 16), an FFA related to the degradation of PS20, was determined to be 7 in formulations containing both antibodies and polysorbates. This finding provides a lower bound of ≥7 for the apparent p K a value of OA under pharmaceutically relevant solution conditions, considering that the value tends to increase with the length of the carbon chain. At pH values ≤5, significantly lower than the apparent p K a (∼7), OA can thus be expected to be fully protonated, whereas at higher pH values, OA is expected to be partially or fully ionized ( Figure S5 ). Deprotonation causes an increase in the interfacial net-charge and attractive ion–dipole interactions between OA carboxyl groups, which results in a decrease in the intermolecular distances between OA molecules at the interface and potentially modulates the interaction strength between the OA interface and adsorbing protein and surfactant molecules. 
Antibodies Rapidly Adsorb and Form a Viscoelastic Layer on Oleic Acid Droplets We have recently developed a microfluidic droplet device capable of simultaneously probing the adsorption, viscoelastic layer formation, and aggregation of proteins at the SO–water interface on short time scales. A schematic representation of the microfluidic chip is shown in a. Monodisperse micrometer-sized OA droplets are formed in protein formulations at pH 6.4 and 5. OA droplets travel as vertically squeezed plugs inside the channel, wherein they experience a total of 58 expansion and compression cycles. These regions allow for the repeated shape relaxation of the protein-loaded droplet, while at the same time enabling the detection of the rheological response of the droplet interface to the flow field. Brightfield microscopy images are acquired at multiple positions corresponding to droplet residence times in the range of 0.3 and 35 s. The samples can be further analyzed off-chip by collecting droplets at the outlet of the microfluidic chip. We first generated OA droplets in the presence of 30 mg mL –1 mAb1 at both pH 5 and 6.4, which remained colloidally stable when collected at the chip outlet ( b). In contrast, droplets generated in buffer readily merged into larger ones ( c). These observations demonstrate the rapid adsorption of antibodies at the interface of OA droplets, which effectively stabilizes them against coalescence. The protein adsorption results in the progressive formation of a viscoelastic protein layer, which is demonstrated by the restricted droplet relaxation into discs within expansion regions ( d and Movie S1 ). This elongational deformation was absent when OA droplets were formed in the buffer alone. The transition from fluid-like to viscoelastic behavior is likely driven by conformational rearrangements and subsequent interactions between adsorbed proteins. 
This phenomenon has previously been shown to occur at air–water and various oil–water interfaces, resulting in rigid protein layers that modulated the deformation of oil droplets subjected to shear flows. The formation and presence of viscoelastic antibody layers at fluid interfaces are critical in the context of biopharmaceuticals, as they precede the formation of proteinaceous particles upon mechanical rupture. Indeed, upon mechanical perturbation by centrifugation (5000 × g, 5 min) of OA droplets formed in the presence of mAb1, we observed the presence of wrinkles that formed via out-of-plane deformations (buckling) of the OA interface ( e). Moreover, particles were released that could be stained by the extrinsic dye ANS, which reports on protein unfolding and aggregation. Increasing the extent of mechanical perturbation by centrifugation led to a drastic increase in the formation of particles ( Figure S6 ). Effect of Oil Polarity and Protein Physicochemical Properties on Viscoelastic Layer Formation The adsorption and formation of viscoelastic layers around liquid interfaces are influenced by the physicochemical properties of the protein. For instance, the globular protein beta-lactoglobulin (BLG) has been shown to form stronger viscoelastic layers than bovine serum albumin (BSA) at oil interfaces of different polarity. These differences have been attributed to the lower thermodynamic stability, smaller size, and more negative net charge of BLG compared to BSA. Differences have also been observed within the same class of therapeutic proteins (IgG1), showing stronger interfacial layers for the antibody with a higher propensity to form aggregates. We therefore compared the formation of viscoelastic layers of two mAbs of pharmaceutical origin, mAb1 and mAb2, around oils of different polarities, namely OA and SO, the latter representing the most common liquid–liquid interface found in the context of biotherapeutic drug delivery.
The interfacial stability of both mAbs has previously been evaluated and showed superior stability of mAb2 over mAb1 at all interfaces tested, including air–water and solid–water interfaces with varying levels of negative charge and hydrophobicity. The two IgG1s exhibited significant differences in zeta potentials and dipole moments and were similar in terms of computed hydrophobic patch areas, measured melting temperatures Tm, and diffusion interaction parameters kD, the latter considered a good predictor of problematic solution behavior such as viscosity and opalescence. The calculated and measured physicochemical properties of the two mAbs are summarized in . The isoelectric points of mAb1 and mAb2 were close to the upper and lower bounds of typical values (6.5–9.5) of approved mAbs with favorable solution behavior, whereas the negative kD values indicated a tendency for self-association at higher concentrations. Both mAbs exhibited apparent melting temperatures higher than typical cutoff values used to flag problematic mAbs during developability assessment. We quantified the extent of deformation of OA droplets formed in protein formulations in different expansion regions corresponding to residence times ranging between 0.3 and 35 s using the dimensionless parameter D = (w − h)/(w + h), where w and h represent the major and minor axes of the droplet, respectively (see Methods and Figure S1 ). a–c shows the extent of deformation of SO and OA droplets in the presence of mAb1 and mAb2 at 30 mg mL⁻¹ and pH 6.4. The data of mAb1 at the SO interface were taken from ref . The deformation of SO droplets increased with the residence time for both mAbs and reached a plateau after approximately 20–30 s, which can therefore be considered the characteristic time scale for viscoelastic layer formation under these conditions. At the SO interface, mAb2 showed significantly less deformation (−30%) compared to mAb1.
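The deformation parameter defined above can be computed directly from the fitted droplet axes; a one-line helper (w and h in the same length unit, e.g., pixels or μm):

```python
def deformation(w: float, h: float) -> float:
    """Dimensionless droplet deformation D = (w - h)/(w + h), where w and h
    are the major and minor axes: D = 0 for a sphere, D -> 1 for a thin rod."""
    return (w - h) / (w + h)
```

A stiff viscoelastic layer restricts relaxation into discs, so stronger layers appear as larger plateau values of D at long residence times.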
Essentially, no layer formation was detected for mAb2 at the OA–water interface, while mAb1 formed a rigid layer whose strength showed a plateau after a similar time scale as with SO ( b,c). We confirmed these results by spiking the samples with Alexa Fluor 647-labeled antibodies at a 1:700 ratio of labeled to unlabeled mAb and observing their on-chip adsorption at the OA–water interface. In agreement with the deformation data, OA droplets in the presence of mAb1 showed a bright fluorescence rim, whereas no adsorption of mAb2 could be detected ( d). Moreover, droplets formed in the presence of mAb2 underwent rapid coalescence events off-chip, further demonstrating negligible or no adsorption ( Movie S2 ). Next, we assessed whether the deformation data could correlate with particle formation under conditions close to those of product formulations. To this aim, we applied mechanical perturbation using a stress test that simulates conditions close to drug administration by syringes. Specifically, OA droplets formed in mAb1 and mAb2 formulations were subjected to pumping cycles using pharmaceutical plastic syringes. Consistent with the droplet deformation data and the fluorescence signal measured on-chip, OA droplets formed in the mAb1 formulation shed proteinaceous particles into solution upon syringe pumping, while no particles were observed in the mAb2 formulation ( Figure S7 ). These data further agree with the previously reported superior stability of mAb2 at air–water and different solid–liquid interfaces. At pH 6.4, mAb2 has a lower net charge compared to that of mAb1, which, in combination with its significantly lower dipole moment, may reduce adsorption at the polar and negatively charged OA interface. Here, stability differences between antibodies may be more pronounced than at hydrophobic interfaces, such as air–water and SO–water, since proteins can populate more native conformations compared to the non-native structures generated at hydrophobic interfaces.
Protein surface properties are therefore important to modulate interactions at polar interfaces, including electrostatic and dipole–dipole interactions, but may have smaller effects on non-native, hydrophobic interactions at nonpolar interfaces. The results illustrate that the destabilizing effect of OA is dependent on the specific mAb, and that care should be taken with antibodies that exhibit high zeta potentials and dipole moments in formulations at pH values where OA is expected to bear a net negative charge. Interestingly, the interfacial stability of the two mAbs differs despite their similar bulk stability behavior assessed by kD and Tm. The Formation of a Viscoelastic Layer around OA Droplets is Modulated by pH and PS80 We then investigated the influence of pH and intact PS80 concentration on the kinetics and extent of the protein layer formation. The deformation of droplets increased with residence time and reached an apparent plateau after approximately 30 s ( a,b) at both pH 5.0 and 6.4. The deformation plateau value of mAb1 at pH 6.4 was higher than at pH 5, indicating that adsorbed proteins formed a slightly stronger viscoelastic layer when formed at the partially ionized OA interface, which may be caused by a denser packing of proteins. At pH 5, OA is expected to be fully protonated in contrast to pH 6.4, where a degree of dissociation of about 20% was computed using the Henderson–Hasselbalch relationship ( Figure S5 ). For the same mAb and buffer system, PS80 prevented the formation of a viscoelastic layer at the nonpolar SO interface at concentrations above its critical micelle concentration (cmc). In the presence of 0.01% PS80, a concentration 6-fold higher than its cmc (0.0017%), the formation of a protein layer at the OA interface was only partially prevented at pH 6.4 ( a), whereas complete protection was observed at pH 5.0 ( b).
A PS80 concentration of 0.05% (30-fold higher than the cmc) was required to completely prevent the formation of a viscoelastic protein layer at pH 6.4. We note that in commercial antibody products, PS80 is formulated at concentrations ranging from 0.6- to 120-fold its cmc, with the majority equal to or below 30-fold. The higher concentration of PS80 required to compete with antibody adsorption at the OA–water interface compared to the SO–water interface can be explained by the higher polarity, which leads to weaker interactions. Additionally, oils with increased polarity can interact more strongly with water via H-bonds and polar−π interactions. These attractive interactions result in a competition between the oil and surfactant molecule for interfacial adsorption, leading to a reduction in the maximum interfacial concentrations of the surfactant. At pH 6.4, partially ionized OA bears a net negative charge and exhibits increased polarity compared to pH 5, causing strong ion–dipole interactions between the ionized and protonated carboxyl groups on the OA interface with each other ( b) and with surrounding water molecules. Thus, the affinity of the intact PS80 surfactant for the polar OA interface may be reduced, allowing antibody molecules to compete more effectively for interfacial adsorption. Consequently, mAbs form a viscoelastic layer, although with lower strength compared to a surfactant-free system, likely due to the presence of coadsorbed, intercalating PS80. We also tested intermediate PS80 concentrations between 0.01 and 0.05% at pH 6.4, observing a monotonic decrease of the plateau value with increasing PS80 concentrations ( c). Interestingly, for 0.01, 0.02, and 0.03% PS80 concentrations, the droplet deformation profiles exhibited an apparent lag phase during the first 10 s ( a and S8 ). This lag phase can be explained by considering the faster adsorption of PS80 compared to the mAb.
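The proposed kinetic competition can be illustrated with a toy two-species Langmuir model (not the fitted model of eqs S4–S6; all rate constants below are illustrative): the surfactant adsorbs quickly but reversibly, while the antibody adsorbs slowly and essentially irreversibly, producing an initially surfactant-dominated interface that is gradually displaced by protein.

```python
def competitive_coverage(k_on_s=100.0, k_off_s=10.0, k_on_p=1.0,
                         dt=1e-3, t_end=50.0):
    """Explicit-Euler integration of fractional coverages on a Langmuir
    lattice: theta_s (surfactant, reversible) and theta_p (protein,
    irreversible). Bulk concentrations are folded into the on-rates."""
    theta_s = theta_p = 0.0
    trajectory = []
    steps = int(t_end / dt)
    for i in range(steps):
        free = 1.0 - theta_s - theta_p
        theta_s += dt * (k_on_s * free - k_off_s * theta_s)
        theta_p += dt * k_on_p * free  # no desorption term: irreversible
        trajectory.append(((i + 1) * dt, theta_s, theta_p))
    return trajectory
```

With these rates the surfactant dominates the interface for the first seconds (the lag phase) before the protein coverage overtakes it and approaches full occupancy.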
The mAb is initially excluded from the interface as a result of the kinetic competition with the surfactant. Assuming that antibody adsorption is essentially irreversible and that surfactant adsorption is reversible and characterized by a high desorption rate constant, the antibody can subsequently accumulate at the interface and form a viscoelastic layer. This mechanism is supported by a model based on Langmuir adsorption which describes the experimental data ( a, S9–10, eqs S4–S6, and Table S1 ). L-Arg Prevents Antibody Adsorption and Viscoelastic Layer Formation In the previous section, we showed that, under certain buffer conditions and surfactant concentrations, intact PS80 alone may not be sufficient to protect antibodies at the OA–water interface. We therefore analyzed the effect of other excipients commonly present in antibody formulations. The addition of 130 mM sodium chloride (NaCl) to a surfactant-free formulation decreased the strength of the viscoelastic layer but could not suppress its formation ( a). Even a 2-fold increase in the concentration (260 mM NaCl) led only to a further marginal decrease in the deformation plateau ( Figure S11a ). These results suggest that shielding of attractive electrostatic interactions between the interface and mAb1 alone may not be sufficient to prevent protein adsorption and layer formation. We next investigated the effect of L-arginine (L-Arg), which is widely used in therapeutic antibody formulations due to its favorable effects on protein bulk properties, such as reduction of solution viscosity and aggregation. However, to the best of our knowledge, the effect of L-Arg on antibody interfacial stability at high concentrations has not been investigated.
L-Arg at pH 6.4 exhibits various features, including a positive net charge that can modulate electrostatic interactions, as well as a deprotonated carboxyl group, a protonated amino group, and a guanidinium group, which allow the formation of intermolecular H-bonds and interactions with aromatic side chains. We tested the effect of L-Arg at concentrations between 13 and 130 mM, which correspond to the typical range in approved antibody products. The time-dependent deformation values in the presence of 130 mM L-Arg at pH 6.4 in the absence of PS80 are shown in a, while the plateau values at the end of the device as a function of L-Arg concentration are shown in b (see Figure S12 for full data sets). Increasing the concentration of L-Arg led to a gradual decrease of the strength of the interfacial layer, whose formation was completely inhibited at 130 mM. Interestingly, the formation of the layer was not completely inhibited when the formulation was supplemented with either 130 mM guanidine hydrochloride (GdnHCl), L-lysine (L-Lys), or a 1:1 mixture of the two, each at 130 mM, suggesting that the combination of a positively charged amino group and a guanidinium group in the same molecule, as present in L-Arg, is key to effectively prevent protein destabilization at this interface ( Figure S11b ). Analysis of samples spiked with mAb1-Alexa Fluor 647 by fluorescence microscopy showed that the addition of 130 mM L-Arg prevented the formation of a fluorescent protein rim ( c,d). These results indicate that L-Arg prevents antibody adsorption, with possible mechanisms including the shielding of attractive electrostatic interactions, the modulation of H-bonds, and the interaction with exposed hydrophobic groups of the antibodies. Given the observed effect of L-Arg, we analyzed the impact of 25 mM L-His, a commonly used buffer with a partially protonated imidazole side chain. We observed the same results as obtained with sodium phosphate buffer ( Figure S13 ).
Moreover, an increase in the concentration of L-His to 130 mM decreased the strength of the viscoelastic layer to an extent similar to that of NaCl and L-Lys ( Figure S11 ), indicating that L-His offers no protective effect beyond electrostatic shielding. L-Arg Mitigates Protein Particle Formation at OA–Water Interfaces in High Antibody Concentration Formulations The experiments discussed in the previous sections were performed at 30 mg mL⁻¹. To analyze the effect of OA under conditions that are closer to formulations for subcutaneous administration, we measured the stability of the two antibodies at a concentration of 180 mg mL⁻¹ at pH 6.4 in the absence of PS80. For mAb1, we observed progressive buckling of the viscoelastic protein layer and the shedding of particles into solution around OA droplets when traveling through the compression and expansion zones of the chip ( a and Movie S3 ). Moreover, even after 1 h of incubation, the OA droplets collected at the end of the chip did not relax back to a spherical shape and buckles remained at their interface, indicating the irreversible formation and collapse of the protein layer into solid-like particles. In contrast, for mAb2, OA droplets exhibited much less buckling on-chip and fully relaxed to a spherical shape off-chip ( b), showing no collapse of the adsorbed protein layer. The difference between the two antibodies at a high concentration is therefore consistent with the experiments performed at a lower concentration of 30 mg mL⁻¹ (Figures b–d and S7 ). The addition of 0.01% PS80 delayed but did not suppress the onset of buckling of the protein layer at the OA–water interface ( c). Buckling was still observed even at 0.05% PS80 at 180 mg mL⁻¹ antibody concentration ( Figure S14a ), in contrast to the results previously obtained at 30 mg mL⁻¹ ( a).
These observations indicate that at the higher protein concentration, the surfactant was only partially able to compete with the antibody for adsorption at the OA–water interface. This behavior was qualitatively predicted by the Langmuir model ( eqs S4–S6 and Figure S10c,d ), showing that a 6-fold increase in antibody concentration results in a rapid decay of surfactant coverage. However, consistent with the same experiment with 30 mg mL⁻¹ mAb1, the presence of 130 mM L-Arg, in both the absence and presence of 0.01% PS80 ( d and S14b ), effectively prevented buckling and particle formation at a mAb1 concentration of 180 mg mL⁻¹. Conclusions Oleic acid (OA) is the primary free fatty acid resulting from the enzymatic hydrolysis of PS80 in pharmaceutical formulations. Here, we applied a droplet microfluidic device to investigate, for the first time, the interactions between therapeutic antibodies at concentrations up to 180 mg mL⁻¹ and the OA–water interface. In analogy to the silicone oil–water interface, we showed that antibody adsorption at the OA–water interface can lead to the formation of a viscoelastic protein layer on a time scale of seconds, which precedes particle formation upon mechanical perturbations such as syringe pumping. We further demonstrated that two antibodies with similar bulk stability behavior but different zeta potentials and dipole moments exhibit different interfacial stability, underlining the role of protein surface physicochemical properties in modulating interactions with polar liquid interfaces. We illustrated that the ability of intact PS80 to protect antibodies from the OA–water interface depends on the pH and the concentration of antibodies, which compete for interfacial adsorption. Importantly, the presence of L-Arg at a concentration typical of approved mAb products completely prevented the formation of the viscoelastic layer and of detectable particles, even at high antibody concentrations.
This work demonstrates the ability of L-Arg to protect therapeutic proteins at polar, negatively charged liquid interfaces such as OA, in addition to its other effects on antibody bulk properties, such as viscosity. Overall, our findings indicate that OA droplets generated in antibody products undergoing enzymatic hydrolysis can lead to the formation of protein particles via an interface-mediated pathway. Depending on the physicochemical properties and concentration of the protein and buffer constituents, intact PS80 alone might not be sufficient to protect against protein aggregation. Additional mitigation strategies may include the optimization of protein physicochemical properties, pH, and the addition of L-Arg. Droplet microfluidic platforms, potentially in combination with machine learning methods, can assist in this optimization and enable mechanistic insights into the stabilizing properties of various excipients under different formulation conditions, even at high protein concentrations, using low sample volumes in the microliter range. |
A Multicenter Study of Factors Related to Early Implant Failures—Part 1: Implant Materials and Surgical Techniques | 35a801f8-4328-4630-9862-beb597821587 | 11840881 | Surgical Procedures, Operative[mh] | Introduction For more than 30 years, dental implants have successfully been used to rehabilitate partially or completely edentulous patients. In recent decades, the development and research on dental implants have improved the treatment options and contributed to the macro and micro designs currently used. Initially, the bulk material used by implant dentistry was commercially pure titanium (CP Ti). Although CP Ti Grade 4 is still the most widely used implant material, newer materials such as titanium zirconium alloys (TiZr) have attracted interest. Increased material strength has enabled the production of narrow dental implants made of TiZr. Yet, more long‐term follow‐ups and studies with larger patient groups are required to scientifically confirm the clinical performance of this alloy. The threaded macro design has evolved to be adapted to different clinical conditions and contribute to primary stability. The first phase of stabilization, achieved through mechanical insertion of the implant into the bone, is influenced by clinical‐ and implant‐related factors. Surgical techniques and implant designs are based on the assessment of patient‐related factors such as bone quality and quantity. For example, tapered implants achieve a higher primary stability in bone with low quality. Moreover, deeper threads, smaller pitch, more threads, and micro threads, as well as longer and wider implants, contribute to stability in poor‐quality bone by increasing bone‐to‐implant contact. The primary stability phase is followed by the biological stabilization phase, which is influenced by the microstructure of the implant surface. Similar modification and coating techniques for CP Ti implants have also been applied to newer implant materials.
However, each modification leads to changes at the micrometer and nanometer levels, which could impact clinical outcomes. Most implant failures during the first year are related to loss of osseointegration. The osseointegration process is an immunological reaction to the implanted material. The bone healing mechanism is influenced by implant characteristics, surgery, and patient‐related factors. Several clinical factors have been suggested to increase the risk for early implant failures and complications, such as reduced bone volume, smoking habits, healing complications, implant jaw placement, poor primary stability, surgical trauma, and infections. Moreover, patients' history of periodontitis and reduced ridge dimensions increase the risk for biological complications and early implant failures. Few studies have examined how the use of different implant materials, designs, and surgical techniques has changed over time. Contemporary consensus on surgical technique and the availability of different implant materials and designs may become evident when comparing years between which major engineering and research developments have taken place. Clinical studies with large patient cohorts are needed to improve the understanding of early implant complications and failures related to implants and surgical techniques. To this end, the aim of this study is to investigate the clinical use of dental implant materials, macro and micro designs, and surgical techniques in relation to early implant complications and failures in two patient cohorts—that is, patients treated in 2007 and in 2017. Materials and Methods This retrospective study is based on implant surgeries performed in the region of Västra Götaland, Sweden. Patient records were investigated for the two cohorts and included all implants inserted at three specialist centers.
2.1 Patient Inclusion A search in the digital dental record system T4 (Carestream Dental AB, Stockholm, Sweden) was conducted to identify patients who had received implant surgery in 2007 and 2017 at three specialist centers in Sweden. The patients were identified using the charge codes for implant surgeries applied by the Swedish general insurance system. Patients with partial and complete edentulism were included. 2.2 Data Collection Data were retrieved from the dental record system T4 as well as from digital and analog registries on implant surgeries. In 2007, the surgery reports were archived as paper journals at the clinics and in the regional archive. In 2017, the surgery reports were scanned into the records as files. The information concerns only the surgical part of the treatment. Following surgery, patients were referred back to their dentist for continued prosthetic rehabilitation. The following data were collected from surgical reports and journals from both 2007 and 2017: anamnestic information, assessment of bone volume and bone quality, use of preoperative antibiotics, implant jaw placement, implant surgery procedure including submerged or non‐submerged surgery, primary stability, bone augmentation, sinus lifts, sinus membrane perforations, bone perforations and exposed threads, implant specifications for material and design, and early implant complications and failures. 2.3 Definition Early implant failures and complications that occurred immediately after insertion or within the first year were included. 2.4 Statistics Statistical data analysis was performed in SPSS (IBM SPSS Statistics for Windows, Version 28.0; Armonk, NY: IBM Corp). Descriptive statistics were used for numbers, means, and frequencies. Fisher's exact test and Fisher's permutation test, with a significance level of p < 0.05, were used to compare the outcomes of the two cohorts.
The main data analysis for implant complications and failures was done with a multivariable logistic regression model with significance level of p < 0.05. In the univariable analysis, all variables were compared with significance level of p < 0.2 for further variable selection and inclusion in the multivariable logistic regression model. Early implant failures and complications were classified as the dependent variable; all other data were classified as the independent variables. The multivariable logistic regression analyses were done at the patient, jaw, and implant levels. For each level, the following variables were analyzed: age, gender, diseases, allergies, smoking, bone quality, bone volume, primary stability, preoperative antibiotics, number of implants per patient, early implant failure, early implant complications, bone augmentation, bone perforation and exposed threads, sinus membrane perforation, sinus lift, implant length, implant diameter and implants placed in the maxillae vs. mandible. In addition, implant‐specific variables were analyzed at the implant level: submerged or non‐submerged surgery, implant position (incisive/canine, premolar or molar region), implant materials, implant manufacturer, and one specific brand due to its special character. The presence or absence of general diseases, allergies, and smoking habits was statistically analyzed as binary outcomes (yes/no) as well as the incidence of early implant failure, early implant complications, exposed threads, sinus membrane perforation, and bone augmentation (event/no event). The variables primary stability, bone quality, and bone volume were reported and analyzed on a three‐, four‐, or five‐step ordinal scale. At the patient and jaw level, data on implant length, diameter, primary stability, bone quality, and bone volume were analyzed as mean values for patients with multiple implants. 
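The two-step strategy above (univariable screening at p < 0.2, followed by a multivariable logistic model) can be sketched in pure Python. The study itself used SPSS, so the code below is only an illustration of the screening step, using a Wald test from a Newton–Raphson (IRLS) fit and synthetic data; all variable names and parameter values are hypothetical.

```python
import math
import random

def slope_p_value(xs, ys, iters=25):
    """Two-sided Wald p-value for the slope of a univariable logistic
    regression y ~ b0 + b1*x, fitted by Newton-Raphson (IRLS)."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0  # gradient and Hessian entries
        for x, y in zip(xs, ys):
            eta = max(-30.0, min(30.0, b0 + b1 * x))  # clamp to avoid overflow
            p = 1.0 / (1.0 + math.exp(-eta))
            w = p * (1.0 - p)
            g0 += y - p
            g1 += (y - p) * x
            h00 += w
            h01 += w * x
            h11 += w * x * x
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    se1 = math.sqrt(h00 / det)  # SE of b1 from the (approximate) inverse Hessian
    z = abs(b1) / se1
    return 2.0 * (1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0))))

def univariable_screen(candidates, ys, alpha=0.2):
    """Keep variables whose univariable p-value falls below alpha,
    for inclusion in the subsequent multivariable model."""
    return [name for name, xs in candidates.items()
            if slope_p_value(xs, ys) < alpha]
```

On synthetic data with one strongly predictive variable, the screen reliably retains that variable, while a pure-noise variable is kept only if it happens to cross the deliberately liberal 0.2 threshold.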
At the patient level, implant jaw placement in both maxillae and mandible was calculated as a ratio of number of implants placed in the maxillae divided by the number of implants per patient. 2.5 Ethical Protection The study was approved by the Ethical Review Board, Sweden (2019‐01330).
Results 3.1 Patient Inclusion and Exclusion A total of 1949 patients were registered as having received implants: 862 patients from 2007 and 1087 patients from 2017. In total, 74 (3.8%) patients were excluded from the study—63 patients from 2007 and 11 patients from 2017. As the surgical reports were not found for 52 patients from 2007 and 8 patients from 2017, these patients were excluded. Eleven patients from 2007 and three patients from 2017 had their implants inserted the previous year (i.e., 2006 and 2016, respectively) but were charged in 2007 and 2017, respectively. Therefore, these patients were excluded. Two zygoma implants were excluded from 2007 due to deviant design and surgical technique. 3.2 Patient and Implant Data The implant treatment was performed at three specialist clinics for maxillofacial surgery (Table ). In 2007, 799 patients—443 (55.4%) women and 356 (44.6%) men—with 2473 implants were included. In 2017, 1076 patients—596 (55.4%) women and 480 (44.6%) men—with 2287 implants were included. There was no statistically significant difference between the cohorts regarding gender distribution ( p > 0.30). The mean age was 55.7 years (SD 22.1) for the 2007 cohort and 54.0 years (SD 18.2) for the 2017 cohort ( p = 0.068). The number of implants per patient was significantly lower in the 2017 cohort: 2.1 implants per patient in 2017 vs. 3.1 implants in 2007 ( p < 0.001). The proportion of implants placed in the maxillae and mandible was similar ( p > 0.30) for the two cohorts.
Some implants were recorded as re‐entry operations: 13 (0.5%) implants in 2007 and 24 (1.0%) implants in 2017. 3.3 Surgical Aspects Statistically significantly fewer patients were given a single dose of preoperative antibiotics in 2017. Cortical bone plate perforations at implant surgery and exposed threads were significantly more prevalent in 2017 at both the patient and implant levels. More bone augmentations were performed before or in conjunction with implant surgery in 2017 at the implant level. In 2017, fewer submerged and more non‐submerged implant placements were performed (Table ). Bone quality and quantity were reported according to the Lekholm and Zarb classification system, and primary stability was reported on a three‐step ordinal scale—good, moderate, or poor. These variables were not fully reported at all clinics. At the implant level, bone quality was more frequently reported as value 1 and less often as values 2–4 in 2017, which resulted in a lower mean bone quality value. There was no statistically significant difference between the cohorts for the mean bone volume. Moderate primary stability was more frequently reported in the 2017 cohort, which resulted in a higher mean primary stability value than in the 2007 cohort (Table ). 3.4 Implant Materials and Micro and Macro Design In 2007 and 2017, the most used bulk material was CP Ti Grade 4. In 2007, CP Ti Grades 1–4 were used; in 2017, CP Ti Grade 4 and TiZr were used (Table ). Implants with moderately rough surfaces (Sa 1–2 μm) were used in 2007 ( n = 2408, 97.4%) and 2017 ( n = 2287, 100.0%). In 2007, machined minimally rough surfaces (Sa 0.5–1.0 μm) were still in use ( n = 65, 2.6%). Seven implant manufacturers were used in 2007 and four manufacturers in 2017. More implants with external abutment connections were used in 2007 than in 2017. In 2017, internal abutment connections were the most frequently used connection type.
In 2007 and 2017, bone‐level design was used in a majority of patients; however, in 2017, more implants with the soft tissue‐level design were used (Table ). Implants with straight or conical macro designs with varying tapers were available. In 2017, tapered implants with a deeper thread design were used. A new implant brand was introduced (NobelActive; Nobel Biocare, Gothenburg, Sweden) which, unlike the other tapered implants, had a widely spaced, expanding double‐threaded design with drilling blades at the apex (Table ). Compared to 2007, mean implant length decreased and mean implant diameter increased in 2017 (Table ). The proportion of short implants (≤ 8 mm) was 128 (5.2%) in 2007 and 254 (11.1%) in 2017. Narrow dental implants (≤ 3.3 mm) were used in both years: 324 (13.1%) in 2007 and 287 (12.5%) in 2017. In 2017, most of these implants were made of TiZr ( n = 191, 63.1%). 3.5 First‐Year Implant Loss In 2007, 23 (2.9%) patients lost one or more implants; in 2017, 40 (3.7%) patients lost one or more implants ( p > 0.30). At the implant level, significantly more implants were lost in 2017 than in 2007 ( p < 0.001). In 2007, 26 (1.1%) implants were recorded as failures compared to 56 (2.4%) failed implants in 2017. In 2007 and 2017, most implants were lost during the first 6 months—19 (73.1%) and 41 (73.2%), respectively. Implants were mainly lost due to biological reasons and were reported as infections, osseointegration loss, major bone loss, or a combination of these factors. In 2007, two implants were lost due to technical reasons and were reported as fractures caused by external facial trauma. In some cases, the reason for implant failure was unknown (Table ). A multivariable logistic analysis showed two statistically significant variables that increased the risk for implant failure at the patient level: exposure of threads and number of implants per patient. When calculated at the jaw level, similar results were obtained.
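The implant-level failure comparison reported above (26/2473 in 2007 vs. 56/2287 in 2017) can be cross-checked with a standard two-proportion z-test. The paper does not state which test produced the reported p-value, so this is only an illustrative sketch using the reported counts:

```python
# Two-proportion z-test on the reported early implant failure counts.
import math

n1, f1 = 2473, 26   # 2007: implants placed, early failures
n2, f2 = 2287, 56   # 2017: implants placed, early failures

p1, p2 = f1 / n1, f2 / n2          # 1.1% vs. 2.4%
pooled = (f1 + f2) / (n1 + n2)     # pooled failure proportion
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p2 - p1) / se                 # z is about 3.7
# two-sided normal tail probability: 2 * (1 - Phi(|z|))
p_two_sided = math.erfc(abs(z) / math.sqrt(2))
```

The resulting two-sided p-value is around 0.0002, consistent with the reported p < 0.001.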
A multivariable logistic analysis showed 11 statistically significant variables that increased the risk for implant failure at the implant level (Table ). The variables sinus lifts, manufacturer D, manufacturer G, and implant material CP Ti Grade 2 were excluded from the multivariable logistic analysis of early implant failures, as no failures were reported for these variables. Implant manufacturer F was perfectly correlated with the material CP Ti Grade 1, as all implants of this material were manufactured by manufacturer F. 3.6 First‐Year Implant Complications Significantly more return visits for postoperative complications were recorded in 2017 at both the patient and implant level ( p < 0.001). In 2017, 145 (13.5%) of the patients experienced one or multiple complications compared to 56 (7.0%) of the patients in 2007. The reported complication rates at the implant level were 241 (10.5%) in 2017 and 94 (3.8%) in 2007. The reported complications for the 2007 and 2017 cohorts included implant mobility and bone loss at the marginal or peri‐implant level; postoperative infections, including pus, fistula, abscess, and granulation tissue, sometimes in combination with swelling and fever; bone necrosis; and minor postoperative symptoms, including edematous tissue, tenderness, trismus, redness, minor swelling, and symptoms due to poor oral hygiene. Other surgical complications included removal of small bone fragments, mucosal penetrations, bone exposure, bone overgrowth over abutments, loss of healing abutments, complications after external facial trauma, and healing complications related to early implant overload (Table ). A multivariable regression analysis revealed five significant variables that increased the risk for complications at the patient level. When calculated at the jaw level, equivalent results were obtained. Nine significant variables indicated an increased risk for complications at the implant level (Table ).
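The logistic risk-factor modelling described above was fitted to the study's own data, which are not reproduced here. As a hedged illustration of the core technique only, the sketch below fits a single-predictor logistic regression by Newton-Raphson on invented counts; with one binary predictor, exp(b1) equals the 2x2-table odds ratio:

```python
# Minimal univariable logistic-regression sketch (synthetic data, not the
# study's): relating a binary risk factor to a binary outcome.
import math

def fit_logistic(x, y, iters=25):
    """Fit P(y=1 | x) = 1/(1 + exp(-(b0 + b1*x))) by Newton-Raphson."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = p * (1.0 - p)
            g0 += yi - p                # gradient of the log-likelihood
            g1 += (yi - p) * xi
            h00 += w                    # observed information matrix
            h01 += w * xi
            h11 += w * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # Newton step: H^-1 * gradient
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Invented counts: 12/30 complications with the risk factor, 7/70 without.
x = [1] * 30 + [0] * 70
y = [1] * 12 + [0] * 18 + [1] * 7 + [0] * 63
b0, b1 = fit_logistic(x, y)
odds_ratio = math.exp(b1)  # equals the 2x2-table odds ratio, here 6.0
```

The real analyses were multivariable (several predictors fitted jointly, as in the tables referenced above); the single-predictor version is shown only to keep the sketch short.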
The variable manufacturers D and G were excluded from the multivariable logistic analysis of early implant complications, as no complications were reported for these implants. Complications and implant failure at the implant level were significantly related ( p < 0.001). Complications were reported before implant failure for 59 (72.0%) of the lost implants at a follow‐up visit at one of the specialist clinics. The most reported complications before implant failure were implant mobility and bone loss ( n = 44, 53.7%) and infection symptoms ( n = 26, 31.7%). Discussion The results confirm that major changes were made in dental implant treatment protocol between 2007 and 2017. Comparing these years, several variables regarding surgical technique and implant material and design were shown to be statistically significant. Some of these differences were related to early implant failures and complications. Expanded inclusion criteria for patients receiving implants are assumed to have a substantial impact on the outcome. The analyses were performed at the patient, implant, and jaw level. All the variables were dependent on patient‐related factors and natural biological variations. From a patient perspective, analyses at the patient level may be the most interesting as the outcome of the treatment as a whole is considered rather than the outcome of each individual implant. However, because bone status may differ between the posterior and frontal region as well as between the mandible and maxillae , some variables can be analyzed at the implant or jaw level. A slight shift towards the use of the new material (i.e., TiZr) was seen in 2017.
TiZr was mainly used for narrow dental implants (≤ 3.3 mm). In the present study, TiZr implants were shown to increase the risk of complications. However, TiZr implants in narrower diameters are typically used for implant sites with small gaps or reduced bone volume , and such compromised sites may themselves result in more complications. For both the 2007 and the 2017 cohorts, implants of titanium Grade 4 were used the most, but CP Ti Grades 1–4 implants were used in 2007. Lower titanium grades were replaced by higher titanium grades because fractures were observed in implants made of CP Ti Grades 1 and 2 . In the present study, implants of CP Ti Grade 1 were shown to increase the risk for early implant failure, which may be related to the microstructure of these implants. In 2017, the original implants with minimally rough surfaces were replaced with implants with moderately rough surfaces. Clinical research confirms improved primary stability and bone apposition and a reduced risk of implant failure for implants with moderately rough surfaces ( S a 1–2 μm) . Moreover, in general, shorter and wider implants were used in 2017. The advantage of adapting the implant length to anatomical boundaries is to avoid complex surgery and biological complications . Implant length seems to support primary stability up to a certain length , but moderately rough surfaces, taper design, and wider diameters also increase the bone‐to‐implant contact area and may be more crucial for achieving primary stability in low‐density bone . Early implant failures are defined as failure to establish bone formation around the implants, with fibrous tissue healing rather than bone healing . In this study, implant failure rates of 2.9% and 3.7% at the patient level, and 1.1% and 2.4% at the implant level were reported in 2007 and 2017, respectively.
In previous studies, lower failure rates have been reported for implants lost during the initial healing period, before connection of the abutment or supraconstruction, or up to the second stage of surgery . However, these studies include a shorter period of time than defined for early failures in this study. Derks et al. reported an early failure rate of 4.1% at the patient level and 1.4% at the implant level for randomly sampled patients from a Swedish national register in 2003 . Nevertheless, different implant materials and designs have been used over time and a patient cohort selected from a national sample is different from a patient cohort treated at specialist clinics, making them not fully comparable. Still, there are other studies reporting a higher early failure rate . In a retrospective study of patients treated in 2010–2016 at a university clinical setting, a failure rate of 3.1% at the implant level was reported . Taken together, the variation in failure rates presumably depends, among other things, on the definition of early implant failure, the selected patient cohort, the time period included, the implants used, and whether the treatment is performed at a specialist or a general dental clinic. This study found that several factors could impact implant survival. Previous studies have confirmed that perforations of the cortical bone plate and exposed threads are risk factors for complications. Dehiscence and fenestrations may increase the risk of compromised aesthetics, gingival retraction and periimplantitis . In this study, the number of implants placed per patient was also shown to be related to early implant failures, which agrees with results from previous studies . Implant surgery of multiple implants may result in longer treatment time, increased contamination, and reduced blood supply . Moreover, the higher the number of implants, the higher the hazard rate for implant complications and failures .
Still, loss of teeth is related to the biological process of osteoclastic activation and bone alteration . Jaw atrophy may be a consequence of multiple tooth loss over time . Reduced bone volume at the implant insertion site is related to a higher implant failure rate . Hence, bone resorption could be an underlying risk factor for implant failure related to the number of implants inserted. Extended inclusion criteria for patients eligible for implant treatment planning may result in implant insertions in compromised implant sites. The results of the variables bone quality, bone volume, and primary stability should be interpreted with caution as these variables were not fully reported by all clinics, and data are missing. However, the use of short implants (≤ 8 mm) and the need for augmented bone may indicate implant insertions in sites with bone resorption or generally less bone volume . Moreover, implants with pronounced tapers and deeper thread designs for use in compromised bone quality were used in 2017. Implants from manufacturer A were most frequently used in both years, which led to greater exposure and a statistically higher risk for implant complications and failures. Nevertheless, one specific implant brand from the same manufacturer used in 2017 had a significantly higher risk for implant loss (NobelActive). These implants, with expanding conical body and double‐thread design, may generate high primary stability even in low‐quality bone . However, implants placed in poor bone sites are more prone to fail . Bone status is a critical factor for implant success and therefore highly influences the choice of implant. Furthermore, bone status is probably a more decisive factor for the risk of implant failure than the implant material and macro design per se . Perforation of the maxillary sinus membrane also increases the risk of early implant failure and complications.
However, results based on a limited number of cases and characterized by large confidence intervals should be interpreted with caution. The cause of implant failure is complex, with several interacting factors. Limited bone height could be considered an aggravating condition that increases the risk for sinus perforations . Similar results have been observed in other studies . However, generally high implant survival rates are shown for implants inserted in augmented sinuses after sinus perforations . Yet, other studies have confirmed an increased risk for postoperative complications, including symptoms associated with infections and sinusitis after sinus perforations . A non‐submerged surgical technique was significantly more common in 2017 than in 2007, which reflects the trend toward an increased number of treatments in partially edentulous patients with implant‐supported prostheses . A non‐submerged surgical technique may be preferred as it shortens the treatment time, reduces costs, and improves convenience for the patient . However, non‐submerged surgery was shown to carry a significantly higher risk of implant failure. Most studies have found that both techniques could be used depending on the clinical condition , but, as in previously published studies, the present study's results suggest that patients at risk of implant failure may benefit from submerged healing, which helps prevent functional overloading . Submerged surgery, on the other hand, was shown to generate more postoperative complications, which may be a natural consequence of two‐stage wound healing. The consensus on the routine use of preoperative antibiotics has evolved. Preoperative antibiotics were given less often in 2017 than in 2007. This difference is possibly an effect of the treatment strategies for rational antibiotic use and reduced antibiotic resistance (STRAMA) adopted as a global action plan by the World Health Assembly in 2015 .
Yet, the result from the present study indicates that a single preoperative dose of antibiotics may decrease the risk of implant complications, a finding that agrees with previous studies . Still, the patient's medical history and health condition should be considered, as well as the difficulty of the surgical procedure, when discussing the indications for preoperative antibiotics . Male gender and smoking were also shown to increase the risk for early implant failure and complications. Several studies have concluded that smoking in the initial phase of implant insertion results in poorer wound healing and therefore poorer osseointegration . Male patients had a significantly higher risk for implant failure, a finding that agrees with previously published studies . However, the results in the literature diverge. For example, gender differences in smoking habits have been discussed as a confounding factor . The present cohort study has a high external validity as it represents a total patient inclusion with no patient selection and a low dropout rate. The results reflect the effectiveness of implant treatment in specialist clinics. A limitation of the study might be that the referred patients may not be representative of patients in general. Patients may also have sought dental care at other clinics for early postoperative complications and therefore the complication rate could be underestimated. According to the data in this study, more postoperative symptoms were reported in 2017. Since 2007, there has been an increased emphasis on patient safety and quality assurance in dentistry that may have led to a greater focus on the reporting and management of complications. Thus, the risk of underestimation is likely higher in 2007 than in 2017. A wide range of complications was observed in this study, with varying degrees of severity. A possible limitation is that the severity of complications was not assessed.
However, the effect of postoperative symptoms should be interpreted with caution when evaluated retrospectively. Still, clinical risk factors for increased return visits could be identified, as all reported complications were included. Moreover, the retrospective study design may be a limitation, as the data are based on patient dental records written for medical documentation and not standardized for clinical research. Conclusions This study compares implant materials, designs, and surgical techniques used in 2007 and 2017. The analyses of early implant complications and failures are consistent with previous studies. Notable changes were observed in the cohorts. By 2017, moderately rough surfaces had completely replaced minimally rough surfaces, and a new dental implant material, TiZr, had been introduced. In addition, in 2017, treatments were characterized by more non‐submerged surgeries, more bone augmentation procedures, less use of prescribed preoperative antibiotics, and tapered implants with variable thread design for soft-bone indications. The increased use of these treatment options reflects less standardized implant protocols and more individualized treatment planning in 2017 than in 2007. The cause of significantly more frequent early implant complications and failures in 2017 than in 2007 may partly be explained by extended patient inclusion criteria for dental implant treatment. Observed differences related to increased risk for implant failure in 2017 were a higher incidence of exposed implant threads and cortical bone perforations, more non‐submerged surgeries, increased use of shorter implants, and the use of one specific implant brand with a sharp and tapered thread design. Many of these factors may indicate treatment performed in compromised bone.
There was an increased risk for complications in 2017 that might be related to less frequent prescription of preoperative antibiotics, more implants with exposed threads and cortical bone plate perforations, and frequent use of implants made of TiZr, most of them in narrow diameters to compensate for reduced bone volume. Further research is required to investigate the influence of other patient‐related factors, such as patients' general health and clinical conditions, including implant jaw position in relation to bone status. Concept and design, data collection, planning of statistics, data analysis in collaboration with biostatistician, data interpretation, drafting of article, critical revision, and approval of the article: Rachel Duhan Wåhlberg. Concept and design, funding secured by scholarship from government research support in Public Dental Service Region Västra Götaland (TUA), planning of statistics, data interpretation, critical revision, and approval of the article: Victoria Franke Stenport. Concept and design, funding secured by scholarship from The Swedish Research Council, and critical revision: Ann Wennerberg. Concept and design, planning of statistics, data interpretation, critical revision, and approval of article: Lars Hjalmarsson. The authors declare no conflicts of interest.
Physiotherapy practice in women’s health: awareness and attitudes of obstetricians and gynecologists in Ghana | 370201e9-f77c-4de4-9fea-d0b0b425d59f | 10714590 | Gynaecology[mh] | Physiotherapy plays a significant adjunctive role at all stages of healthcare in various medical specialties including women’s health . It is not a substitute but a complimentary intervention to other forms of clinical management to enhance efficiency and quality of medical care. Physiotherapists as part of the health care team play an important role in reducing hospital stay duration, recovery period and rehabilitation for a better quality of life . There is a common misconception that physiotherapy’s importance is limited to musculoskeletal conditions but the scope is wider and incorporates other specialty areas such as women’s health care . Physiotherapy in women’s health is pivotal in treating a wide variety of obstetric and gynecological issues . For instance, pregnancy is characterized by disturbing physiological changes (physical and emotional) and childbirth further compounds the stress. However, various physiotherapy interventions including breathing exercises and relaxation may be both preventive and therapeutic in women’s health especially during labour . Physiotherapists are efficient in managing some complications of pregnancy and childbirth such as pelvic floor dysfunction and low back pain via manual therapy, exercise and or electrotherapeutic modalities . Though physiotherapy is vital in maternal health, it is still not widely practiced in low and middle income countries and remains underutilized . Utilization of individual professional skills in the multidisciplinary approach depends on co-operation between healthcare team members and the extent to which they value the knowledge of one another . 
Previous studies have recorded low referral rates and poor utilization of physiotherapy by obstetricians and gynecologists, and suboptimal knowledge concerning the preventive role of physiotherapy during antenatal and postnatal care has been implicated . Physicians, including obstetricians and gynecologists, are at the ‘top of the pyramid’ of health professionals, and have profound influences on other health workers including physiotherapists, in terms of making the appropriate referrals . The issue of delayed involvement of physiotherapy by most physicians has been of great concern to physiotherapists . In order for patients to be referred to other members of the multidisciplinary team, health professionals need to understand each other’s role and contribution towards patients’ care . Though sufficient evidence exists in international literature concerning the role and effectiveness of physiotherapy in the practice of obstetrics and gynecology , awareness of its scope remains limited in Ghana. More recently, an important initiative to actively integrate physiotherapy in urogynecology commenced at the largest tertiary hospital in Ghana and two young physiotherapists received formal training in pelvic floor rehabilitation . Optimal integration of physiotherapy services in maternal health by obstetricians/gynecologists depends on their knowledge of the specific conditions amenable to physiotherapy. In Ghana, there is limited research on the role of physiotherapy in women’s health. However, there is evidence that obstetric events contribute significantly to the burden of urinary incontinence and pelvic organ prolapse, which require adjunctive physiotherapy. In addition, the obstetrician’s knowledge and awareness of the role of physiotherapy in women’s health have not been reported. In a descriptive study of seven hospitals in Nigeria, Odunaiya et al.
determined that the obstetricians and gynecologists demonstrated limited knowledge about specific conditions amenable to physiotherapy treatment, although they had general knowledge concerning the role of physiotherapy in women’s health . The aim of this study was to evaluate the level of awareness and attitudes of obstetricians/gynecologists towards physiotherapy in women’s health and the factors influencing its utilization in Ghana. This study highlights significant clinical insights into the management of women’s health issues during the antenatal, intrapartum and postpartum periods and gynecological care in Ghana. Study design and site The study was a cross-sectional study conducted at the Department of Obstetrics and Gynecology at the Korle Bu Teaching Hospital (KBTH), in Accra, Ghana. KBTH is the largest tertiary hospital in the southern part of Ghana, and an accredited training facility for both the West African and Ghana College of Physicians and Surgeons. It is currently the leading national referral center in Ghana comprising various medical specialty departments. The Department of Obstetrics and Gynecology of the hospital is divided into five units (Teams A, B, C, D and E) and has a bed capacity of 372 (97 and 275 beds for Gynecology and Obstetrics, respectively) . A senior consultant heads each unit with other consultants and doctors (comprising senior residents, junior residents and house officers) equally distributed among the various units. The consultants are usually permanent while the other doctors (mainly residents) rotate through the teams. Each unit has its specific clinic, theatre and grand ward round days. Study participants The participants for the study were medical doctors (consultants, senior residents and junior residents) in obstetrics and gynecology who were working at the hospital.
A consultant obstetrician/gynecologist is a medical doctor of the highest rank who deals with women’s health problems relating to the female reproductive system. Senior and junior residents are medical doctors in residency training to become consultants and specialists, respectively. At the time of the study, there were 17 consultants, 23 senior residents and about 55 junior residents in the Department of Obstetrics and Gynecology, KBTH. The inclusion criteria were obstetricians/gynecologists (consultants) and clinicians who were pursuing their residency training program and had worked at the Department of Obstetrics and Gynecology for at least one year. Specific exclusion criteria were failure to provide informed consent, house officers undertaking their rotations at the Department of Obstetrics and Gynecology and resident doctors who were less than 12 months into their residency training. Also, doctors who were on leave were excluded from the study. Data collection and variables Prior to the data collection, a formal protocol presentation was done at the Department of Obstetrics and Gynecology to all the clinical staff including the doctors, nurses and midwives during one of their clinical meetings. A convenience sampling method was used in recruiting the study participants based on their accessibility, availability at the time of the study and willingness to participate. The KBTH was chosen as the study site because it is the largest residency training center for obstetrics and gynecology in Ghana and manages a high number of obstetric and gynecological cases. Based on the nature of the study, the sampling procedure employed all the available and willing doctors (consultants, senior residents and junior residents) working at the KBTH with reference to the inclusion and exclusion criteria. An “Awareness and Attitude Questionnaire” was adapted from a standardized questionnaire used in previous work in the subregion .
The questionnaire comprised three sections exploring socio-demographic characteristics, awareness of physiotherapy, attitude, and factors influencing utilization of physiotherapy among obstetricians and gynecologists. The awareness items on physiotherapy in obstetrics and gynecology had the responses ‘yes’, ‘no’ and ‘not sure’, while the attitude scale used a Likert format (‘strongly agree’, ‘agree’, ‘somewhat’, ‘strongly disagree’ and ‘disagree’). The study participants were given the questionnaire to complete during their regular morning meetings held in the conference room at the maternity unit from Monday to Friday. Some of the questionnaires were also distributed at the obstetric clinic, the gynecology clinic and on the respective wards. Participation in the study was voluntary and participants were informed that they were free to withdraw from the study at any time. Return visits and contact follow-up were used to collect the completed questionnaires from doctors who were not able to complete the questionnaire immediately. Figure indicates the flow chart for recruiting the study participants. Data analysis Data were entered into Microsoft Excel and analyzed using the R statistical package (version 3.6.3, R Core Team, Vienna, Austria). Descriptive statistics of frequencies and percentages were used to determine the awareness and attitude of the doctors towards physiotherapy in obstetrics and gynecology. The chi-square test or Fisher exact test of association was used to compare the association between awareness and the categories of the study participants (i.e. position: consultants, senior and junior residents). Logistic regression was used to determine the association between the doctors’ awareness level and years of practice, adjusting for the doctor’s position, sex and age. Statistical significance was set at p < 0.05.
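The chi-square test of association described in the data analysis can be illustrated with a minimal, dependency-free sketch. The contingency table below uses hypothetical counts (doctor category versus awareness), not the study’s data, and the function mirrors only the Pearson test statistic that a package such as R computes before deriving a p-value:

```python
# Illustrative Pearson chi-square statistic for an r x c contingency table.
# The counts below are hypothetical, NOT the study's data.

def chi_square_statistic(table):
    """Pearson chi-square statistic: sum over cells of (O - E)^2 / E."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# Rows: consultants, senior residents, junior residents
# Columns: aware, not aware (hypothetical counts)
table = [[6, 1],
         [16, 4],
         [27, 7]]
print(round(chi_square_statistic(table), 3))
```

In practice the statistic would be compared against a chi-square distribution with (rows − 1) × (columns − 1) degrees of freedom, and the Fisher exact test would replace it when expected cell counts are small, as the paper notes.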
Over the study period, sixty-one (64.2%) obstetricians/gynecologists (out of a total of 95), including resident doctors, participated, comprising 7 (11.5%) consultants, 20 (32.8%) senior residents and 34 (55.7%) junior residents. The median age of the doctors was 35 years (range: 29-65 years) and the mean (± SD) duration of practice was 9.41 ± 4.71 years (range: 4-35). Most of the participants were residents [54 (88.5%)], with the 7 consultants constituting 11.5%. There were 50 (82.0%) male and 11 (18.0%) female doctors, and most of them [38 (62.3%)] had practiced medicine for 6 to 10 years (Table ). Among the doctors, 55.7% (n = 34) and 44.3% (n = 27) had practiced for less than 10 and for 10 or more years, respectively. Awareness of physiotherapy’s role in obstetrics and gynecology The majority of the doctors showed high awareness of the role of physiotherapy in all categories of obstetric care, ranging from 72.1 to 91.8%, with the postnatal period being the highest. There were mixed results for awareness concerning the role of physiotherapy in specific or selected gynecological conditions (Table ).
Over 95% (n = 58) of the doctors reported the need for physiotherapy in managing uterine prolapse, while 19.7% (n = 12) were aware of the role that physiotherapy plays in the management of pelvic inflammatory disease. Moderate awareness (57.4%, n = 35) was reported for the role of physiotherapy following hysterectomy. Attitude towards physiotherapy in obstetrics and gynecology In general, the doctors thought physiotherapists are proficient members of the obstetrics and gynecology rehabilitation team, suggestive of a positive attitude. However, there were specific areas where attitudes were judged negative. For instance, only 12 participants (19.7%) strongly agreed with physiotherapists’ involvement during childbirth while 25 (41.0%) agreed [thus, 60.7% (n = 37) agreed in total]. This indicates that approximately 40% did not agree with the need for the involvement of physiotherapists during childbirth. Also, 25 (41%) indicated that physiotherapists have been effective in their inter-professional relationships (Table ). Most of the obstetricians, 56 (91.8%) and 54 (88.5%), alluded to the relevance of physiotherapy in gynecology and obstetrics, respectively. None of the doctors strongly agreed that physiotherapy may not contribute significantly to the complete well-being of gynecological patients, although 1.6% (n = 1) agreed. Also, 2 doctors (3.3%) agreed that physiotherapy may not contribute significantly to the complete well-being of obstetric patients. Table indicates the full spectrum of doctors’ attitudes towards physiotherapy in obstetrics and gynecology. Factors associated with utilization of physiotherapy Overall, the doctors reported positive influences on their use of physiotherapy. However, the main factor limiting their utilization of physiotherapy was the non-availability of physiotherapists in sufficient numbers to cover the obstetrics and gynecology wards.
According to the study participants, only 6.6% (n = 4) reported that physiotherapists attend ward rounds with doctors (Table ). Similarly, 6.6% (n = 4) indicated that there were enough physiotherapists to cover both the obstetrics and gynecology wards. Concerning previous working experience with physiotherapy, 50.8% (n = 31) and 45.9% (n = 28) had worked with physiotherapists in managing obstetric and gynecological patients, respectively. On the other hand, 44.3% (n = 27) and 47.5% (n = 29) had not previously worked with physiotherapists in managing obstetric and gynecological patients. Association between doctors’ category and awareness level of physiotherapy in obstetrics and gynecology There were important findings relating doctors’ categories to their awareness of the role of physiotherapy in obstetrics and gynecology (Table ). Generally, consultants had higher awareness of the role of physiotherapy in antenatal care compared with senior and junior residents (85.7% versus 80.0% and 78.8%, respectively). On the other hand, senior residents reported higher awareness than consultants for parturition or childbirth (85.0% versus 71.4%) and the postnatal period (100.0% versus 85.7%, respectively). In terms of gynecology, consultants generally showed higher awareness than senior and junior residents in the management of PID (28.6% versus 25.0% and 15.2%, respectively), hysterectomy (85.7% versus 55.0% and 52.9%, respectively) and cervical incompetence (28.6% versus 20.0% and 26.5%, respectively). However, there were no statistically significant differences between the consultants and residents concerning awareness of physiotherapy’s role in women’s health. The postnatal period and uterine prolapse were excluded from the logistic regression because the participants reported overwhelming relevance of physiotherapy in their management (91.8% and 95.1%, respectively).
Practicing for ten years or more was associated with 3.5-fold increased odds of doctors’ awareness concerning the role of physiotherapy during childbirth (OR = 3.560, 95% CI: 1.070-14.220) in the unadjusted model (Table ). However, the significance disappeared in the adjusted model. Similarly, practicing for ten years or more showed an increased tendency towards high awareness of the role of physiotherapy following hysterectomy; however, this did not reach statistical significance in either the unadjusted or the adjusted model.
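As a hedged illustration of how an unadjusted odds ratio and its 95% Wald confidence interval of this kind are derived from a 2×2 table, the sketch below uses hypothetical counts, not the study’s data; the study itself fitted logistic regression models in R:

```python
# Odds ratio and 95% Wald CI from a 2x2 table:
#   OR = (a*d) / (b*c);  CI = exp(ln(OR) +/- z * sqrt(1/a + 1/b + 1/c + 1/d))
# The counts below are hypothetical, chosen for illustration only.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """OR and Wald CI for the table [[a, b], [c, d]] (exposure rows x outcome columns)."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi

# Hypothetical: 24/27 aware among >= 10 years of practice vs 25/34 among < 10 years
or_, lo, hi = odds_ratio_ci(24, 3, 25, 9)
print(f"OR = {or_:.2f}, 95% CI: {lo:.2f}-{hi:.2f}")
```

A confidence interval that crosses 1, as in this hypothetical example, corresponds to a non-significant association, which is what the adjusted models in the study showed.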
In this hospital-based study, the obstetricians/gynecologists demonstrated high awareness of the role of physiotherapy in obstetrics (between 72.1 and 91.8%) in all aspects of maternal care, with the highest awareness associated with postnatal care. This is consistent with the 68% awareness regarding postnatal exercises determined by Munawar et al. in Pakistan . For specific gynecological conditions, mixed findings were determined (between approximately 20 and 95%); awareness was highest for uterine prolapse and lowest for pelvic inflammatory disease. Uterine prolapse is a likely complication of childbirth resulting from weakness of the pelvic floor muscles, and physiotherapy as a conservative management, in the form of regular pelvic floor exercises, can be initiated in the immediate postpartum period . Hence, the finding of high awareness of physiotherapy’s role in postnatal care and uterine prolapse treatment is appreciable. A similar study conducted in Nigeria also reported high awareness levels for postnatal care and uterine prolapse . It is, however, important to emphasize that a high level of awareness of physiotherapy’s relevance in maternal health does not directly translate into optimal clinical utilization in terms of making timely referrals of postnatal mothers for physiotherapy services. Intriguingly, major variation in the level of awareness was determined among the categories of doctors in this study. For instance, consultants demonstrated the highest awareness (85.7%) of the role of physiotherapy in antenatal care and in most gynecological conditions compared with the residents (78.8%). The high level of awareness regarding the role of physiotherapy in women’s health demonstrated by the consultants is partly attributable to their extended duration of practice and varied clinical exposure and experience. In Ethiopia, Kutty reported similar findings and attributed the level of awareness to clinical experience and a longer period of exposure to cases requiring physiotherapy .
Previous studies have determined that doctors’ characteristics, such as years in practice, greatly influence their level of awareness . In our study, a duration of clinical practice of 10 years or more significantly increased the odds of doctors’ awareness regarding the importance of physiotherapy in childbirth (odds ratio = 3.5), but not in other clinical areas. However, the statistical significance disappeared after adjusting for the relevant confounders. Therefore, further research with a larger sample size is recommended to evaluate this association. Regarding attitude towards physiotherapy, the majority of the obstetricians had a positive attitude, although areas of negative attitude were also recorded. For instance, only 19.7% of obstetricians strongly agreed with physiotherapists’ involvement during labour. This finding partly accounts for the lower level of awareness of the relevance of physiotherapy during parturition as compared with the other categories of maternal health care. The low awareness on the part of obstetricians concerning the need for active participation of physiotherapists in the management of labour and delivery is intriguing. Generally, continuous support during childbirth is strongly recommended for women because of its association with improved birth outcomes, and physiotherapists are recommended as major contributors. Likewise, it is imperative that physiotherapy services are made freely available to women in labour to reinforce the education received during the antenatal period and to supplement non-pharmacological pain management in labour. Furthermore, over 30% of the obstetricians/gynecologists disagreed that physiotherapists had been effective in their inter-personal relationships with other health professionals. This finding may be due to complaints raised about physiotherapists’ infrequent availability at ward rounds.
The opinions of the doctors on physiotherapy practice clearly reveal their inherently low impression of the professional scope of physiotherapy. There is a need for physiotherapists to create more awareness regarding the scope of physiotherapy within the multidisciplinary team comprising obstetricians/gynecologists, nurses and midwives. Inter-professional education may improve collaboration among members of the multidisciplinary team and facilitate effective and efficient team work, resulting in improved quality of care . More recently, Goyekar and Shah recommended that regular professional communication and improved interaction between obstetricians/gynaecologists and physiotherapists may improve the utilization of physiotherapy in women’s health . In a similar study in Nigeria, Odunaiya et al. concluded that high awareness does not necessarily translate into a positive attitude . In this study, factors influencing utilization of physiotherapy services were explored, and most of the responses supported high awareness and utilization of physiotherapy in women’s health. This is in line with the study by Sangal et al., who reported that knowledge about a service is a very vital factor in determining its utilization . It is important to constantly showcase the availability of the various physiotherapy services in the hospital to the various medical specialties to enhance optimal utilization and early referral for physiotherapy. This will encourage active participation of physiotherapists in all aspects of women’s health where the involvement of physiotherapy services is vital. Most of the obstetricians/gynecologists had previously worked with physiotherapists in the management of obstetric and gynecological patients. This previous working experience partly accounts for their favourable attitude towards the involvement of physiotherapy in the management of specific obstetric and gynecologic cases.
Nevertheless, nearly 50% of the doctors reported that there are limited numbers of physiotherapists to cover the obstetrics and gynecology wards, which in turn affects the overall utilization of physiotherapy. The reason for this may be the low recruitment rate of physiotherapists into government hospitals and the lack of an adequate number of facilities for training physiotherapists. To buttress this point, only 6.6% of the doctors had ever attended ward rounds with physiotherapists. This suggests suboptimal coordination and the lack of a functioning multidisciplinary approach to clinical management, resulting in suboptimal quality of care for women’s conditions which require physiotherapy services. There is an urgent need to create more awareness about the critical importance of physiotherapy in women’s health. Clinical and research implications Our study indicates that physiotherapy remains a vital adjunct in the management of common conditions in obstetrics (antenatal, intrapartum and postnatal, including post caesarean section), in accordance with other studies , and in gynecology (surgical and non-surgical) . Figure highlights the common obstetric and gynecological conditions which require physiotherapy services and the available physiotherapeutic modalities . There is an urgent need to actively integrate physiotherapy services into women’s health care, with regular monitoring and evaluation of their impact on the quality of care women experience. Proactive integration of coordinated inter-professional education through advocacy and workshops involving the obstetric/gynecological multidisciplinary teams is vital in optimizing utilization of physiotherapy in women’s health care. The need to recruit more physiotherapists in government hospitals and to provide continuous professional training opportunities is well acknowledged to ensure improved quality of care in women’s health.
A recent qualitative study showed that several factors influence women’s adherence to pelvic floor exercises, including effective physiotherapy programs, their personal experiences, awareness or beliefs, and professional feedback . This evidence supports the immense role of physiotherapists in women’s health and the urgent need for their optimal integration to improve the quality of care for women requiring such adjunctive care. Further research (including qualitative designs) of high methodological quality relating to the role of physiotherapy in the practice of obstetrics and gynecology is strongly recommended. Special areas of research include assessing implementation challenges associated with regular utilization of physiotherapy services in women’s health. In addition, research involving the opinions of relevant stakeholders, including women and other health professionals (nurses and midwives), is recommended to facilitate efficient physiotherapy practice in women’s health. The strength of the study relates to the fact that it is the first study conducted to assess obstetricians/gynecologists’ awareness, attitudes and utilization of physiotherapy in women’s health in Ghana. The findings will serve as baseline information for further studies on physiotherapy in women’s health. The small number of study participants constitutes a major limitation and might have influenced the findings. The study employed mostly close-ended questions, which narrowed the doctors’ opinions and concerns about the physiotherapy profession. The doctors could not describe their own experiences concerning the role of physiotherapy in women’s health, as a qualitative research design would have allowed, and this constitutes a significant limitation. Also, the non-inclusion of other health professionals, such as nurses and midwives providing maternity care services, is considered a limitation of the study, as the responses of only the doctors might be skewed.
Most of the obstetricians and gynecologists showed high awareness of physiotherapy services in women’s health. Overall, the consultants showed higher awareness than the resident doctors in antenatal and gynecological care, while senior residents had higher awareness in intrapartum and postnatal care, although these differences were not statistically significant. Junior residents generally showed the lowest awareness levels compared with consultants and senior residents. A clinical practice duration of ≥ 10 years was not significantly associated, after adjustment, with increased odds of doctors’ awareness concerning the relevance of physiotherapy in childbirth and other clinical areas.
There were mixed findings concerning the doctors’ attitudes towards physiotherapy in women’s health. Factors influencing the utilization of physiotherapy services include the non-availability of enough physiotherapists and the failure of physiotherapists to attend ward rounds, which would otherwise enhance education on the scope of physiotherapy practice in women’s health. |
Accuracy of lung ultrasound performed with handheld ultrasound device in internal medicine: an observational study | 9b2c2271-b6d7-465f-8549-94f596990a5e | 11496455 | Internal Medicine[mh] | In recent years, lung ultrasound (LUS) has emerged as a reliable and rapid tool for the evaluation of patients with pulmonary diseases . LUS, plus the inferior cava vein (ICV) assessment, can improve the diagnosis of many cardiopulmonary conditions such as pleural effusion, interstitial lung disease and pneumonia; moreover, LUS can guide procedures (i.e. thoracentesis), drive therapeutic timing and dosage (i.e. diuretic therapy) and it is a valid instrument for monitoring and prognosis of patients with heart failure . In the last few years, the development and spread of knowledge in the field of LUS and the expansion of inexpensive and handy tablet or smartphone/tablet format devices, have made the point-of-care ultrasound (POCUS) approach become a cornerstone in the evaluation of patients with respiratory symptoms. Nowadays, LUS is a fundamental supplement to the medical examination. Indeed, pocket-sized devices are frequently used in the bedside evaluation of hospitalized patients, however, their use is rapidly increasing even in ambulatory settings to answer simple clinical questions especially thanks to their low cost and good performance profiles . These days, ultrasound is considered an essential aspect of bedside examination since it can help to rapidly frame the patient and accelerate the diagnostic therapeutic pathway. However, despite the widespread of poket-sized devices thanks to new technologies, with a major boost for their development during the SARS-CoV2 pandemic, to date there are no solid data to support the interchangeable use between high-end ultrasound devices (HEUSDs) and handheld ultrasound devices (HHUSDs), except for few papers comparing the two methods . 
Despite technological advancements enabling the development of increasingly efficient pocket-sized ultrasound machines, the literature still reports lower performance of these devices in terms of image quality compared to HEUSDs. This discrepancy is attributed to various factors, including lower spatial resolution, reduced contrast and higher levels of noise. Specifically, the literature highlights a reduced penetration depth of the ultrasound beams produced by these devices . Thus, the aim of our work was to evaluate the accuracy of LUS performed with a HHUSD compared to a HEUSD in patients admitted to our ward for heart failure or pneumonia, and to determine whether the pocket-sized ultrasound approach has advantages in terms of saving costs and time. We also considered whether obesity may be a limiting condition for HHUSDs due to the increased fat layer and the reduced penetration depth of the sound beams of these machines. We conducted a single-center observational study involving adults hospitalized in the Department of Internal Medicine 4 of the Careggi University Hospital in Florence. Over 6 months, 72 patients were enrolled. For each patient, demographic, clinical and laboratory data were recorded. The enrolled patients underwent LUS plus the evaluation of the ICV, when indicated, both performed with the HHUSD Vscan Extend Dual probe (GE Healthcare), with a phased-array transducer (1.7–3.8 MHz) and a linear transducer (3.3–8 MHz), and the HEUSD Vivid T8 (GE Healthcare) with a convex transducer (3.5–7 MHz). Ultrasound evaluations were performed independently by two different operators experienced in LUS, at closely spaced times (a maximum of 15 min apart) using a standardized imaging protocol .
Every patient was scanned in the supine and sitting positions and, for each of the 58 areas examined, the following data were registered: the number of B-lines, quantified as suggested by the literature , the total number of B-lines resulting from the evaluation of all the spaces explored in the antero-lateral chest and in the posterior chest, and the presence of pleural effusion and/or lung consolidation. Moreover, in patients with heart failure, inferior cava vein ectasia (diameter > 20 mm) and its respiratory excursions, defined as maintained if greater than 50% of the diameter, were recorded. Finally, the duration of each examination was registered. The data obtained with the two types of ultrasound device were then compared, and it was assessed whether the findings identified by the two different methods overlapped. The number of B-lines for each field explored was judged to be overlapping if the numerical difference was ≤ 2. Statistical analysis Continuous normal variables were expressed as mean and standard deviation (SD), and non-normal variables as median (minimum value–maximum value). Categorical variables were expressed as number and percentage. Comparisons between groups were performed with the chi-square test for dichotomous variables. The Wilcoxon test was performed on paired continuous variables for the comparison of HEUSD and HHUSD. A p-value < 0.05 was considered statistically significant. All statistical analyses were performed using SPSS software version 20.0 (IBM, Armonk, New York, USA).
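The concordance criteria described above can be sketched in a few lines of code. This is a minimal pure-Python illustration, not the study's analysis pipeline (which used SPSS): per-field B-line counts are judged concordant when the two devices differ by ≤ 2, while binary findings such as pleural effusion must match exactly. All counts below are invented example data, not study data.

```python
def bline_concordance(heusd_counts, hhusd_counts, tolerance=2):
    """Percent of lung fields where B-line counts agree within tolerance."""
    pairs = list(zip(heusd_counts, hhusd_counts))
    hits = sum(1 for a, b in pairs if abs(a - b) <= tolerance)
    return 100.0 * hits / len(pairs)

def binary_concordance(heusd_findings, hhusd_findings):
    """Percent agreement for presence/absence findings (effusion, consolidation)."""
    pairs = list(zip(heusd_findings, hhusd_findings))
    hits = sum(1 for a, b in pairs if a == b)
    return 100.0 * hits / len(pairs)

# Hypothetical counts for 8 of the 58 fields in one patient
heusd = [4, 0, 2, 7, 1, 3, 0, 5]
hhusd = [3, 0, 4, 3, 1, 2, 1, 5]
print(bline_concordance(heusd, hhusd))  # 87.5: one field differs by more than 2

effusion_heusd = [True, False, False, True]
effusion_hhusd = [True, False, True, True]
print(binary_concordance(effusion_heusd, effusion_hhusd))  # 75.0
```

In practice, the paired B-line totals per patient would then be compared across devices with a paired nonparametric test such as the Wilcoxon signed-rank test, as the authors did.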
Seventy-two patients with a median age of 80 years (range 28–99), including 39 males (54%), were enrolled in the study. Admission diagnoses were heart failure (68%) or pneumonia (32%). The principal comorbidities were hypertension (72%), atrial fibrillation (40%), type II diabetes (24%), dyslipidemia (38%), chronic obstructive pulmonary disease (24%), chronic kidney failure (22%) and coronary heart disease (20%) (Fig. ). 94% of patients had at least one of the following cardiovascular risk factors: obesity, smoking, hypertension, diabetes, dyslipidemia. The most frequently prescribed medications upon admission to the ward were: diuretics (60%), beta-blockers (54%), calcium-antagonists (28%), direct anticoagulants (28%), angiotensin converting enzyme (ACE) inhibitors (20%), sartans (16%), and warfarin (4%). In 70% of the cases, physical examination revealed the presence of wet sounds upon chest auscultation, while in the remaining 30% it revealed bronchial obstruction sounds; moreover, 55% of patients had swollen limbs or feet. The main laboratory alterations upon admission to the ward were an increase in NT-proBNP, troponin, and C-reactive protein (Table ). The comparison between the HHUSD and the HEUSD evaluation showed a concordance rate of 79.3% ± 17.7 (mean ± SD) for the detection of B-lines, 88.6% for pleural effusion and 82.3% for lung consolidations. Concordance rates between the two methods in the evaluation of ICV ectasia and its respiratory excursions were 88.7% and 84.9%, respectively. BMI was available for 69 of the 72 patients, of whom 20% had BMI > 30 kg/m².
In this subgroup of patients, the concordance rate between the two methods was 78.9% ± 12.6 for the detection of B-lines, 86.5% for pleural effusion, 79.5% for consolidations, and 86.3% and 85.8% for the evaluation of ICV ectasia and its respiratory excursions, respectively. Between the two groups (patients with BMI > 30 kg/m² and patients with BMI < 30 kg/m²), there were no statistically significant differences ( p = 0.643) in the LUS and ICV evaluations. Data about concordance rates between the HHUSD and the HEUSD evaluation are reported in Table . The average time taken to perform the evaluations (expressed as mean ± SD) was 8 ± 1.5 min with the HHUSD and 10 ± 2.5 min with the HEUSD, with a statistically significant difference ( p < 0.0001). Bedside ultrasound has become a fundamental diagnostic tool in the Internal Medicine care setting, and it has profoundly changed clinical practice and the approach to patients’ evaluation and management. In particular, LUS is an essential part of POCUS, which is increasingly considered an extension of the physical examination, leading to the modern concept of ultrasound-assisted patient examination . POCUS is, indeed, able to accelerate the diagnostic and therapeutic process and to guide the management of inpatients in Internal Medicine settings. In recent years, pocket-sized ultrasound devices have been increasingly used in many different clinical settings given their low cost and technological evolution. HHUSDs present several advantages, such as their low cost and rapid use, their feasibility and the possibility of always having them at hand due to their small size, which have contributed to their widespread use in numerous hospital and non-hospital settings . Moreover, the use of a pocket-sized, lightweight, and handy device in everyday clinical practice can provide the operator with greater ergonomics due to the smaller spatial footprint, and this may contribute to making healthcare professionals’ movements more comfortable .
The use of a pocket-sized device in daily clinical activities can also help reduce physical stress for sonographers by limiting the repetitive strain injuries that are becoming a perceived, and sometimes disabling, issue for physicians who perform numerous ultrasound evaluations in non-ergonomic positions in environments where space is often limited. However, the performance of HHUSDs in everyday clinical practice and their role in the evaluation of patients with heart failure and pneumonia have not yet been clearly defined. Our data show that the accuracy of LUS performed with a HHUSD is high when compared with a HEUSD in the evaluation of patients with heart failure/lung disease. In particular, the pocket-sized device confirms its accuracy in the evaluation of B-lines and pleural effusion (Fig. ). Moreover, in our study even the evaluation of ICV ectasia and its respiratory excursions is accurate with the HHUSD compared to the HEUSD. ICV evaluation, together with LUS, is fundamental in the management of patients with heart failure for diagnosis, monitoring and follow-up, and for adjusting diuretic therapy. To the best of our knowledge, this is the first study in which both LUS and ICV findings obtained with a pocket-sized ultrasound device are evaluated in comparison with a HEUSD. The substantial overlap in the reported data highlights the validity of the assessments performed with pocket-sized ultrasound devices, thus confirming the reliability of this technique in the management of patients with heart failure. Moreover, there was a substantial correspondence even in the evaluation of lung consolidations; indeed, despite the limits of LUS for the assessment of consolidations , the HHUSD can be a valid option for the management of patients with pneumonia.
In addition, the average time needed to perform the examination with the pocket-sized device is less than the time taken with the high-end machine, which can contribute to faster and more efficient ultrasound assessment at the patient’s bedside. Finally, our study shows that the accuracy of pocket-sized ultrasound devices remains relevant in the evaluation of patients with higher BMIs, making these portable instruments reliable also in this subgroup of patients. Our findings support the interchangeable use of HHUSDs and HEUSDs in the management of inpatients with heart failure and pneumonia. Our study has some limitations; indeed, all the ultrasound examinations were performed by expert lung ultrasound operators, and this may have contributed to increasing the accuracy of the ultrasound examinations performed with the HHUSD. Moreover, our study was conducted in a single center and the sample size was small; further studies will therefore be needed to confirm the interchangeability of the two methods in larger populations. In conclusion, this study shows that LUS performed with a pocket-sized ultrasound device has high accuracy with regard to the detection of B-lines, pleural effusion and lung consolidations. Moreover, it allows adequate assessment of the inferior cava vein. The accuracy remains high also in obese patients. Handheld ultrasound devices can be confidently used for the management of inpatients with heart failure and/or pneumonia. Given the usefulness of these tools, specific operator training should be developed and encouraged to promote the use of HHUSDs in clinical practice. |
Responding to the workforce crisis: consensus recommendations from the Second Workforce Summit of the American Society of Pediatric Nephrology | 38573e3d-bafc-49bc-8b74-086f2b3c26d8 | 11511730 | Internal Medicine[mh] | The American Society of Pediatric Nephrology (ASPN) is the leading voice of pediatric nephrology in North America. Its primary goal is to advance the care of children, adolescents, and young adults with kidney disease through advocacy, education, research, and workforce development. Compelled by a persistent and growing pediatric nephrology workforce crisis, the ASPN convened a second Workforce Summit (Workforce Summit 2.0). The first Workforce Summit, held in 2019, demonstrated the urgent need for equitable reimbursement as well as recruitment and retention strategies to ensure a sustainable, robust, and diverse pediatric nephrology workforce . As a result, the ASPN has made concerted policy and advocacy efforts; however, the workforce crisis not only persists but has worsened in the past 4 years . In the USA, pediatric nephrology fellowships are 3 years in duration, with approximately 1/3 of the time focused on clinical care and the remaining time focused on scholarly projects (e.g., basic science research, clinical research, and quality improvement projects). Upon completion of training, most nephrologists enter the academic workforce, and despite the emphasis on research and scholarly projects during their training, the majority of nephrologists spend the bulk of their time performing clinical care. Despite the length of training, which is equivalent to that of many other pediatric sub-specialty fellowships (e.g., intensive care, neonatology, and cardiology), the salary benchmarks of pediatric nephrologists, controlling for academic rank and geographic region, are lower than those of most other pediatric sub-specialties.
Nephrologists, however, are not alone in this, as several pediatric sub-specialties (e.g., infectious disease) earn lower salaries than general pediatricians . Within this landscape, in 2023, 46% of pediatric nephrology fellowship positions went unfilled, and pediatric nephrology remained the lowest-filled of all pediatric sub-specialties from 2014 to 2022, with a final fill rate of only 65.7% . A detailed projection of future workforce needs by the American Board of Pediatrics anticipates growing demand and widening geographic disparities in the pediatric nephrology workforce from 2020 to 2040 . The objectives of the Summit were to identify current knowledge gaps and outline concrete next steps to make progress on issues that have persistently challenged the pediatric nephrology workforce. The committee recognizes that other pediatric sub-specialties face similar challenges in workforce recruitment, retention, and reimbursement . Children receive optimal care when they have access to providers who have been trained specifically to care for children, yet an estimated 2–53% of children live > 80 mi. away from pediatric sub-specialty care . Advocacy beyond one sub-specialty is not only warranted but essential in order to optimize the care that the pediatric community provides to children . The committee also recognizes that some of the workforce issues discussed herein are specific to the unique practice environment of the USA (e.g., reimbursement/salary); however, many of the issues are more broadly applicable to pediatric nephrologists practicing around the world (e.g., garnering institutional support and recruitment of trainees) . The Workforce Summit 2.0 employed a round table format and methodology for consensus building using adapted Delphi principles . Content domains were identified via input from the ASPN’s Workforce Committee, 2023 Strategic Plan survey, Pediatric Nephrology Division Directors survey, and ongoing feedback from members.
The organizing committee comprised the ASPN President and Workforce Committee Chair. In order to create the content domains and organize the working groups, the organizing committee collated the feedback and identified themes. Five themes were identified, including definition of full-time effort, non-billable work, obtaining institutional support for a robust pediatric nephrology service, salary equity, and recruitment and retention of the workforce. The key controversy was identified for each domain and turned into a question for the working group to answer. The organizers invited 28 faculty representing diverse career types according to their topic-related expertise. Diverse interpersonal representation was also sought, with consideration given to gender, age, race, ethnicity, and LGBTQ + status. Once assembled, faculty were separated into the five working groups to focus on the assigned question for their domain: (1) What is the definition of a 1.0 clinical full-time equivalent (cFTE) in pediatric nephrology?, (2) Would the utilization of academic relative value units (RVUs) for non-billable work improve upon current metrics for pediatric nephrologists’ work?, (3) What is the institutional value of a pediatric nephrology program?, (4) What does salary equity look like for pediatric nephrology?, and (5) What are the pathway considerations for growth of the pediatric nephrology workforce? Working groups met prior to the Summit via conference calls to conduct an organized literature review and establish key questions to be addressed. The Summit was held in person in Philadelphia, Pennsylvania in November 2023. During the Summit, work groups presented their preliminary findings, and the at-large group developed the key action statements and future directions presented herein. Group 1: What is the definition of a 1.0 cFTE in pediatric nephrology?
Consensus statement 1a Clinical full time equivalent (cFTE) includes all billable and non-billable activities related to providing high-quality clinical care for children with kidney disease. Consensus statement 1b Each pediatric nephrology program determines the appropriate makeup of inpatient and outpatient work that best suits their specific patient population and clinical mission and balances the priorities of providing safe and effective care with workforce equity and well-being. Rationale The variability of the clinical work of pediatric nephrologists in different hospital systems renders it difficult to quantify and standardize cFTE using typical calculations (i.e., shifts and clinics) . The work performed by a pediatric nephrologist may include procedural and cognitive components, inpatient coverage and outpatient clinics, and overnight call with potential for life-saving emergency procedures. In addition, the high medical complexity of pediatric nephrology patients requires multidisciplinary collaboration, attention to primary and preventive care designed to slow the progression of kidney disease, and frequent detailed patient and family conversations to ensure sufficient understanding of their child’s disease. The 24-h call coverage entails significant after-hours physician input, often with provision of emergent dialysis which requires physician presence during treatment and decision-making about organ suitability for pediatric transplant candidates. The relative lack of compensation proportional to the perceived workload has been identified as an important root cause of the pediatric nephrology workforce crisis . We recommend that a comprehensive analysis be performed to describe the time and effort required for a discrete block of clinical work that encompasses both inpatient and outpatient responsibilities. This analysis would holistically evaluate the work required for a standard 4-h half-day outpatient pediatric nephrology clinic. 
Specific measures for outpatient analysis would consist of time spent during direct, face-to-face patient interaction, and the additional workload outside the exam room relevant to patient care, including clinic preparation time, order entry, post-clinic documentation, and laboratory/imaging management. A similar analysis can be performed for inpatient pediatric nephrology service coverage. Specific measures for inpatient analysis would include the time spent during direct, face-to-face patient interaction, documentation with laboratory/imaging management, hand-off communication, and after-hours call burden including frequency of transplant organ offer calls for patients awaiting kidney transplantation, and emergent dialysis which requires physician presence during treatment. Data obtained from these analyses could be compared to adult nephrology to better understand the relative workload. A pilot study is also proposed using electronic health record (EHR) data analytics and self-report, as well as time-motion analysis, to collect data in granular detail. Additionally, we recommend collecting work data related to key clinical leadership and/or administrative roles (Dialysis Medical Director, Transplant Medical Director, Acute Care Nephrology Director, etc.) which should be included in the FTE description. The committee recognizes that the practice of pediatric nephrology varies broadly across institutions and regions. 
Important programmatic variables that can impact workload include the number of practicing nephrologists at the program, presence of fellows, residents, and/or advanced practice providers, catchment area and population size, hospital volumes and case mix index, presence/availability of pediatric dialysis and kidney transplantation, local resources, and presence of multidisciplinary programs that require pediatric nephrology expertise (i.e., level 1 Trauma designation, solid organ and bone marrow transplant programs, high-risk obstetric delivery services and level 1 NICU, kidney transplant volume, and the size of the outpatient peritoneal and hemodialysis populations). Given the broad variability that can exist between programs, we caution against the use of benchmarking metrics to define clinical work . In addition, an attempt to determine a universal 50th percentile RVU:cFTE benchmark may perpetuate a “race to the bottom” in which clinicians would be incentivized to spend less time per patient than their peers and may ultimately degrade the quality of care provided . Instead, we propose that individual pediatric nephrology programs use available data to perform detailed and transparent internal work analysis specific to their program and clinical needs. A “one-minus” model could be utilized, in which basic principles describe time allocation to clinical, research, teaching, and administrative activities which sum up to 1.0 FTE . This methodology could subsequently be used to create division-specific worksheets to determine cFTE components which are equitable, fair, and transparent. Example templates could be created by professional societies as “starting points” for small, medium, and large-sized programs through the use of the detailed time-work analyses while recognizing that adaptation of worksheets to fit the needs of the local environment is key to creating a sustainable model for all parties involved. 
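The "one-minus" model described above lends itself to a simple worksheet calculation: enumerate a physician's non-clinical commitments and treat the clinical FTE as whatever remains of 1.0. The sketch below is a hypothetical illustration of that idea; the role names and effort fractions are invented examples, not values recommended by the Summit.

```python
def clinical_fte(nonclinical_roles):
    """Return the clinical FTE remaining after subtracting non-clinical effort.

    nonclinical_roles maps a role description to its FTE fraction; the sum of
    all commitments (clinical and non-clinical) must not exceed 1.0.
    """
    committed = sum(nonclinical_roles.values())
    if committed > 1.0:
        raise ValueError("Non-clinical commitments exceed 1.0 FTE")
    return round(1.0 - committed, 2)

# Hypothetical worksheet for one faculty member
roles = {
    "fellowship program director": 0.20,
    "dialysis medical director": 0.10,
    "resident teaching": 0.05,
    "grant-funded research": 0.25,
}
print(clinical_fte(roles))  # 0.4
```

A division-specific worksheet would extend this with locally agreed fractions for each leadership and academic role, keeping the allocation transparent and auditable.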
Group 2: Would the utilization of academic relative value units (RVUs) for non-billable work improve upon current metrics for pediatric nephrologists’ work? Consensus statement 2a The effort pediatric nephrologists spend on academic non-revenue-generating pursuits, including educational, research, and administrative activities, can be quantified and factored into determining their available capacity for providing clinical care. Consensus statement 2b A standardized rubric to track achievements in non-clinical academic activities in the areas of education, research, quality improvement, administrative leadership, and division citizenship provides a fair and consistent approach to incentive compensation, if applicable. Rationale Academic physicians routinely dedicate effort to non-revenue-generating activities beyond direct patient care. For pediatric nephrologists who practice in academic medical centers, this is an implied expectation to advance core institutional missions, including clinical, research, educational, and advocacy goals . While patient care clinical activities are quantified by well-established Current Procedural Terminology (CPT®) codes , compensation for academic pursuits varies . Certain aspects of the pediatric nephrologist’s academic endeavors may be linked to financial compensation and/or protected time. These include governmental or foundational grant-supported research activities, the Accreditation Council for Graduate Medical Education (ACGME)-accredited fellowship program director role, and dialysis medical director roles . However, most academic activities are completed at the physician’s own discretion, including clinical research activities, mentoring, resident and medical student education, participation in quality improvement projects, and other division-level, hospital-level, or organizational committee leadership roles.
Failure to recognize the effort physicians invest in these non-clinical activities risks physician burnout, job dissatisfaction, and subsequent attrition from the field . While physician compensation models vary and are institution-specific, many may include an end-of-year incentive payment model that rewards physician clinical productivity . Failure to adapt incentive payment models to quantify and reward academic efforts and move away from purely clinical RVU-based metrics risks stifling academic innovation by shifting physician behavior to focus on clinical revenue-generating patient activities over academic endeavors . Group 3: What is the institutional value of a pediatric nephrology program? Consensus statement 3a Pediatric nephrologists contribute to institutional financial margins in ways that are separate from work RVUs (wRVUs); thus, the wRVU system undervalues the effort and indirect income generated by pediatric nephrologists. Consensus statement 3b Availability of a pediatric nephrologist is a prerequisite to many of the high-value medical services offered by institutions. Consensus statement 3c As we move towards quality- and value-based care models, pediatric nephrologists will play a critical role in the financial well-being of medical institutions. Rationale Current wRVU metrics used to estimate the financial value of pediatric nephrologists to their institutions are flawed. The work of pediatric nephrologists, like many less procedurally oriented specialties, is undervalued by the wRVU system . While caring for their primary and consult patients, pediatric nephrologists generate orders and referrals for laboratory testing, medical imaging, surgical procedures, and sub-specialty consultation, none of which are captured by wRVU . Pediatric nephrologists enable institutions to offer a diverse array of medical services.
This is especially true for high-margin service lines such as neonatal intensive care, cardiac surgery, solid organ transplant, and oncology/bone marrow transplantation. Furthermore, the care of critically ill children has been incentivized for institutions due to these high-margin service lines. While pediatric nephrologists perform critical roles in the care of these patients (e.g., dialysis procedures or care after organ transplantation), the downstream revenue supports the primary services (critical care) much more than the consulting services. Regulatory bodies, accreditation entities, society guidelines, and quality metrics such as the U.S. News & World Report rankings track the availability of pediatric nephrology services and kidney replacement therapy programs to determine designations for clinical services at the highest level of care. As more payers move to value-based reimbursement, the increase in cost-savings and efficiency provided by pediatric nephrologists should be recognized . Furthermore, institutions invested in pediatric care would benefit from a heightened awareness of the financial repercussions that may result if the shortage of pediatric nephrologists continues to worsen . In the outpatient setting, the shortage of pediatric nephrologists has resulted in long travel distances for many patients to obtain pediatric nephrology care. This has led to an increase in outreach clinics to better serve the community; however, such clinics place a burden on the workforce in the form of significant travel time, time away from family, and working in clinic environments that may not be able to provide the same level of service as the main practice clinic (e.g., urine microscopy). Group 4: What does salary equity look like for pediatric nephrology? Consensus statement 4a Compensation for pediatric nephrologists will represent the value of kidney care to their organization, including the complexity of caring for patients across the spectrum of pediatric care.
Further, compensation will reflect the value added by pediatric nephrologists in support of hospital and programmatic missions, including margin-positive services that require pediatric nephrology expertise. Consensus statement 4b Reimbursement for provision of sub-specialty care to children with kidney disease will accurately reflect the time and effort required to address their complex, multi-system disease manifestations, such as growth, development, and nutritional needs. In such a system, RVUs would be adjusted to reflect the complexity and demands of care. This alignment is critical to prevent physician burnout and sustain the workforce. Rationale In the current fee-for-service system, care for children with kidney disease is neither sufficiently valued nor appropriately compensated . The lower compensation for a high workload contributes to decreasing trainee interest in pediatric nephrology and affects recruitment and retention of under-represented minorities and non-financially advantaged individuals. Compensation for pediatric nephrologists should be both representative of the value of pediatric kidney care to their organization and reflective of their sub-specialty training. The recent National Academies of Sciences, Engineering, and Medicine (NASEM) Committee Report on the Pediatric Subspecialty Workforce and Its Impact on Child Health and Well-Being focused in part on the accurate reflection of the time and effort required to care for children needing pediatric sub-specialty care . Future iterations of payment systems and reimbursement should reduce financial disincentives to sub-specialty training and consider the unique value added by pediatric nephrologists both to individual patient care and to health systems. For example, high revenue-generating programs such as critical care, stem cell transplantation, and cardiothoracic surgery all rely on the expertise of pediatric nephrologists and dialytic therapies .
Proposed solutions include increased pediatric representation on the agencies that determine current procedural terminology coding and reimbursement. Payment structures should move away from targeting set national benchmarking metrics (which often creates a self-perpetuating cycle) and instead focus on value added. This will require deliberate action by pediatric department chairs, children’s hospital/health system chief executive officers, and medical college deans to meet the needed investment in increased compensation benchmarks for pediatric sub-specialties . Children with kidney disease present added complexity given their age and the importance of growth and development along with other complex care needs. In the USA, the Centers for Medicare & Medicaid Services has recently recognized this complexity by providing enhanced reimbursement for pediatric chronic dialysis care through the use of a 30% add-on payment per treatment for pediatric dialysis patients . Broadening this approach to other high-intensity pediatric kidney disease services (such as advanced pre-dialysis chronic kidney disease and acute dialysis in the inpatient setting) should be considered. Group 5: What are the pathway considerations for growth of the pediatric nephrology workforce? Consensus statement 5a Stronger engagement of pediatric nephrologists with trainees throughout undergraduate medical education and during early pediatric residency may increase interest in a career in pediatric nephrology. Consensus statement 5b Flexibility in fellowship length and design with individualized pathways will encourage more residents to pursue pediatric nephrology, improve the training experience, and potentially reduce the debt burden associated with the mandatory 3-year training. Consensus statement 5c Retention in the existing workforce may be improved by efforts of the ASPN towards incentivizing clinical and research work, improving work-life integration, and increasing remuneration.
Rationale Multiple factors influence a medical student’s decision to choose pediatrics and a pediatric resident’s decision to choose pediatric nephrology, including exposure to the subject early on, perceived difficulty of the subject, having role models and mentors in pediatric nephrology, and consideration of lifestyle and earning potential . Pediatric nephrology divisions will benefit from dedicated faculty in the division who can intentionally work with trainees across all levels to improve exposure to the subject and provide positive role models for careers in the field. Pediatric sub-specialization is financially disincentivized for trainees, as it both delays completion of training and decreases lifetime earning potential . Notably, this is not the case with adult sub-specialization . Additionally, the length of fellowship training and the rigid template requiring mandatory research and scholarly activity may be a deterrent for some trainees. The 3-year training pathway also increases the debt burden of education and training, which, combined with the relatively lower salaries, leads to a significant loss of earning potential . In a survey of almost 800 physicians in their second or third year of pediatric sub-specialty fellowship in the USA in 2007, 52% ( n = 390) would have chosen a 2-year fellowship with less research or scholarly activity . More recently, in another survey by the American Academy of Pediatrics, almost 1500 fellows responded in favor of reducing the training duration to less than 3 years, or of having a shorter-duration track for those who planned to pursue a clinical path and a longer one for those pursuing research . The NASEM Report recommended that the ACGME and the American Board of Pediatrics develop and evaluate alternative fellowship training requirements and pathways, including a 2-year option for those who wish to pursue a clinically focused career.
ASPN, with its in-depth understanding of the challenges facing the pediatric nephrology workforce, needs to be a part of this restructuring. Longitudinal data should be collected to understand the impact of such a change on the composition of the workforce. A concerted effort at multiple levels is also needed to understand the reasons for attrition from the pediatric nephrology workforce and to implement strategies to improve retention. These include incentivizing fellows to complete pediatric nephrology training by utilizing loan repayment plans and visa sponsorships for international medical graduates, as well as other interventions to optimize work-life integration, such as flexible work schedules, use of telehealth for urgent after-hours dialysis initiation, increased engagement of advanced practice providers, and working effectively with general pediatric practitioners to improve referral guidelines to pediatric nephrology and thus share the workload. Efforts to streamline maintenance of certification may also reduce attrition among nephrologists who might otherwise stay in the workforce longer. Finally, the pediatric nephrology community may benefit from collecting data from nephrologists who decide to leave the workforce earlier than anticipated. Consensus statement 1a Clinical full time equivalent (cFTE) includes all billable and non-billable activities related to providing high-quality clinical care for children with kidney disease. Consensus statement 1b Each pediatric nephrology program determines the appropriate makeup of inpatient and outpatient work that best suits their specific patient population and clinical mission and balances the priorities of providing safe and effective care with workforce equity and well-being.
Rationale The variability of the clinical work of pediatric nephrologists in different hospital systems renders it difficult to quantify and standardize cFTE using typical calculations (i.e., shifts and clinics) . The work performed by a pediatric nephrologist may include procedural and cognitive components, inpatient coverage and outpatient clinics, and overnight call with potential for life-saving emergency procedures. In addition, the high medical complexity of pediatric nephrology patients requires multidisciplinary collaboration, attention to primary and preventive care designed to slow the progression of kidney disease, and frequent detailed patient and family conversations to ensure sufficient understanding of their child’s disease. The 24-h call coverage entails significant after-hours physician input, often with provision of emergent dialysis which requires physician presence during treatment and decision-making about organ suitability for pediatric transplant candidates. The relative lack of compensation proportional to the perceived workload has been identified as an important root cause of the pediatric nephrology workforce crisis . We recommend that a comprehensive analysis be performed to describe the time and effort required for a discrete block of clinical work that encompasses both inpatient and outpatient responsibilities. This analysis would holistically evaluate the work required for a standard 4-h half-day outpatient pediatric nephrology clinic. Specific measures for outpatient analysis would consist of time spent during direct, face-to-face patient interaction, and the additional workload outside the exam room relevant to patient care, including clinic preparation time, order entry, post-clinic documentation, and laboratory/imaging management. A similar analysis can be performed for inpatient pediatric nephrology service coverage. 
Specific measures for inpatient analysis would include the time spent during direct, face-to-face patient interaction, documentation with laboratory/imaging management, hand-off communication, and after-hours call burden including frequency of transplant organ offer calls for patients awaiting kidney transplantation, and emergent dialysis which requires physician presence during treatment. Data obtained from these analyses could be compared to adult nephrology to better understand the relative workload. A pilot study is also proposed using electronic health record (EHR) data analytics and self-report, as well as time-motion analysis, to collect data in granular detail. Additionally, we recommend collecting work data related to key clinical leadership and/or administrative roles (Dialysis Medical Director, Transplant Medical Director, Acute Care Nephrology Director, etc.) which should be included in the FTE description. The committee recognizes that the practice of pediatric nephrology varies broadly across institutions and regions. Important programmatic variables that can impact workload include the number of practicing nephrologists at the program, presence of fellows, residents, and/or advanced practice providers, catchment area and population size, hospital volumes and case mix index, presence/availability of pediatric dialysis and kidney transplantation, local resources, and presence of multidisciplinary programs that require pediatric nephrology expertise (i.e., level 1 Trauma designation, solid organ and bone marrow transplant programs, high-risk obstetric delivery services and level 1 NICU, kidney transplant volume, and the size of the outpatient peritoneal and hemodialysis populations). Given the broad variability that can exist between programs, we caution against the use of benchmarking metrics to define clinical work . 
In addition, an attempt to determine a universal 50th percentile RVU:cFTE benchmark may perpetuate a “race to the bottom” in which clinicians would be incentivized to spend less time per patient than their peers and may ultimately degrade the quality of care provided. Instead, we propose that individual pediatric nephrology programs use available data to perform detailed and transparent internal work analysis specific to their program and clinical needs. A “one-minus” model could be utilized, in which basic principles describe time allocation to clinical, research, teaching, and administrative activities which sum up to 1.0 FTE. This methodology could subsequently be used to create division-specific worksheets to determine cFTE components which are equitable, fair, and transparent. Example templates could be created by professional societies as “starting points” for small, medium, and large-sized programs through the use of the detailed time-work analyses while recognizing that adaptation of worksheets to fit the needs of the local environment is key to creating a sustainable model for all parties involved.
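The “one-minus” model described in the rationale above can be sketched computationally; the allocation fractions used here are hypothetical examples, not benchmarks drawn from the source:

```python
# "One-minus" cFTE model: enumerate protected non-clinical allocations
# first; the clinical component is whatever remains of 1.0 FTE.
# The example fractions below are hypothetical, not recommendations.

def clinical_fte(allocations: dict[str, float]) -> float:
    """cFTE = 1.0 minus the sum of all protected non-clinical allocations."""
    non_clinical = sum(allocations.values())
    if not 0.0 <= non_clinical <= 1.0:
        raise ValueError("non-clinical allocations must sum to between 0 and 1")
    return round(1.0 - non_clinical, 2)

# Example: protected research, teaching, and administrative time
# (e.g., a dialysis medical director role).
example = {"research": 0.20, "teaching": 0.10, "administration": 0.15}
print(clinical_fte(example))  # 0.55
```

A division-specific worksheet would simply enumerate more categories (fellowship program director, quality improvement, outreach travel) under the same constraint that everything sums to 1.0 FTE.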
Consensus statement 2a The effort pediatric nephrologists spend on academic non-revenue generating pursuits including educational, research, and administrative activities can be quantified and factored into determining their available capacity for providing clinical care. Consensus statement 2b A standardized rubric to track achievements in non-clinical academic activities in the areas of education, research, quality improvement, administrative leadership, and division citizenship provides a fair and consistent approach to incentive compensation, if applicable. Rationale Academic physicians routinely dedicate effort to non-revenue generating activities beyond direct patient care. For pediatric nephrologists who practice in academic medical centers, this is an implied expectation to advance core institutional missions including clinical, research, educational, and advocacy goals . While patient care clinical activities are quantified by well-established Current Procedural Terminology (CPT®) codes , compensation for academic pursuits varies . Certain aspects of the pediatric nephrologist’s academic endeavors may be linked to financial compensation and/or protected time. Those include governmental or foundational grant-supported research activities, Accreditation Council for Graduate Medical Education (ACGME)-accredited fellowship program director role and dialysis medical director roles . However, most academic activities are completed at the physician’s own discretion including clinical research activities, mentoring, resident and medical student education, participation in quality improvement projects, and other division, hospital level, or organizational committee leadership roles. Failure to recognize the effort physicians invest into these non-clinical efforts risks physician burnout, job dissatisfaction, and subsequent attrition from the field . 
While physician compensation models vary and are institution-specific, many may include an end of year incentive payment model that rewards physician clinical productivity. Failure to adapt incentive payment models to quantify and reward academic efforts and move away from purely clinical RVU-based metrics risks stifling academic innovation by shifting physician behavior to focus on clinical revenue-generating patient activities over academic endeavors.
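The standardized rubric envisioned in consensus statement 2b could be prototyped as a simple weighted composite score. The category weights and year-end ratings below are invented assumptions for illustration; any real rubric would be negotiated locally:

```python
# Illustrative scoring rubric for non-clinical academic activities.
# The five categories come from the consensus statement; the weights,
# 0-100 ratings, and any mapping to incentive dollars are invented.

RUBRIC_WEIGHTS = {
    "education": 0.25,
    "research": 0.25,
    "quality_improvement": 0.20,
    "administrative_leadership": 0.15,
    "division_citizenship": 0.15,
}

def incentive_score(scores: dict[str, float]) -> float:
    """Weighted composite (0-100) from per-category ratings on a 0-100 scale."""
    missing = RUBRIC_WEIGHTS.keys() - scores.keys()
    if missing:
        raise KeyError(f"missing rubric categories: {sorted(missing)}")
    return sum(RUBRIC_WEIGHTS[c] * scores[c] for c in RUBRIC_WEIGHTS)

# Hypothetical year-end self-report for one faculty member.
year_end = {
    "education": 90,
    "research": 60,
    "quality_improvement": 80,
    "administrative_leadership": 70,
    "division_citizenship": 100,
}
print(round(incentive_score(year_end), 1))  # 79.0
```

Tracking the same categories for every faculty member is what makes the approach fair and consistent; the weights themselves can be tuned to a division’s mission.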
Consensus statement 3a Pediatric nephrologists contribute to institutional financial margins in ways that are separate from work RVUs (wRVUs), thus the wRVU system undervalues the effort and indirect income generated by pediatric nephrologists. Consensus statement 3b Availability of a pediatric nephrologist is a prerequisite to many of the high-value medical services offered by institutions. Consensus statement 3c As we move towards quality- and value-based care models, pediatric nephrologists will play a critical role in the financial well-being of medical institutions. Rationale Current wRVU metrics used to estimate the financial value of pediatric nephrologists to their institutions are flawed. The work of pediatric nephrologists, like many less procedurally oriented specialties, is undervalued by the wRVU system.
While caring for their primary and consult patients, pediatric nephrologists generate orders and referrals for laboratory testing, medical imaging, surgical procedures, and sub-specialty consultation — none of which are captured by wRVU. Pediatric nephrologists enable institutions to offer a diverse array of medical services. This is especially true for high-margin service lines such as neonatal intensive care, cardiac surgery, solid organ transplant, and oncology/bone marrow transplantation. Furthermore, the care of critically ill children has been incentivized for institutions due to these high-margin service lines. While pediatric nephrologists perform critical roles in the care of these patients (e.g., dialysis procedures or care after organ transplantation), the downstream revenue supports the primary services (critical care) much more than consulting services. Regulatory bodies, accreditation entities, society guidelines, and quality metrics such as the U.S. News & World Report rankings track the availability of pediatric nephrology services and kidney replacement therapy programs to determine designations for clinical services at the highest level of care. As more payers move to value-based reimbursement, the increase in cost-savings and efficiency provided by pediatric nephrologists should be recognized. Furthermore, institutions invested in pediatric care would benefit from a heightened awareness of the financial repercussions that may result if the shortage of pediatric nephrologists continues to worsen. In the outpatient setting, the shortage of pediatric nephrologists has resulted in long travel distances for many patients to obtain pediatric nephrology care.
This has led to an increase in outreach clinics to better serve the community; however, such clinics place a burden on the workforce in the form of significant travel time, time away from family, and working in clinic environments that may not be able to provide the same level of service as the main practice clinic (e.g., urine microscopy).
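The gap between wRVU-visible work and total institutional contribution described in this rationale can be illustrated with a toy calculation; all relative values here are fabricated for the sake of the example:

```python
# Toy model of why wRVUs understate a consult service's institutional
# contribution: only direct professional work is counted, while downstream
# orders and the margin-positive service lines the consultant enables are
# not. All relative-value figures below are fabricated for illustration.

def captured_by_wrvu(direct_professional: float,
                     downstream: float,
                     enabled_service_lines: float) -> float:
    """Fraction of total modeled contribution that a wRVU metric sees."""
    total = direct_professional + downstream + enabled_service_lines
    return direct_professional / total

share = captured_by_wrvu(
    direct_professional=1.0,    # consult E/M billing (relative units)
    downstream=2.5,             # labs, imaging, referrals generated
    enabled_service_lines=6.5,  # e.g., transplant/ICU programs requiring nephrology
)
print(f"wRVU captures {share:.0%} of the modeled contribution")
```

Even with generous assumptions, any metric that counts only the first term will systematically undervalue a consult-heavy, cognitively oriented specialty.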
Consensus statement 4a Compensation for pediatric nephrologists will represent the value of kidney care to their organization, including complexity of caring for patients across the spectrum of pediatric care. Further, compensation will reflect the value added by pediatric nephrologists in support of hospital and programmatic missions including margin positive services that require pediatric nephrology expertise.
Consensus statement 4b Reimbursement for provision of sub-specialty care to children with kidney disease will accurately reflect the time and effort required to address their complex, multi-system disease manifestations, such as growth, development, and nutritional needs. In such a system, RVU would be adjusted to reflect the complexity and demands of care. This alignment is critical to prevent physician burnout and sustain the workforce. Rationale In the current fee-for-service system, care for children with kidney disease is neither sufficiently valued nor appropriately compensated. The lower compensation for high workload contributes to decreasing trainee interest in pediatric nephrology and affects recruitment and retention of under-represented minorities and non-financially advantaged individuals. Compensation for pediatric nephrologists should be both representative of the value of pediatric kidney care to their organization and reflective of their sub-specialty training. The recent National Academies of Sciences, Engineering, and Medicine (NASEM) Committee Report on the Pediatric Subspecialty Workforce and Its Impact on Child Health and Well-Being focused in part on the accurate reflection of the time and effort required to care for children who need pediatric sub-specialty care. Future iterations of payment systems and reimbursement should reduce financial disincentives to sub-specialty training and consider the unique value added by pediatric nephrologists both to individual patient care and to health systems. For example, high revenue-generating programs such as critical care, stem cell transplantation, and cardiothoracic surgery all rely on the expertise of pediatric nephrologists and dialytic therapies.
Payment structure should move away from targeting set national benchmarking metrics (often creating a self-perpetuating cycle) and instead focus on value added. This will require the deliberate action by pediatric department chairs, children’s hospitals/health system chief executive officers, and medical college deans to meet the needed investment in increased compensation benchmarks for pediatric sub-specialties. Children with kidney disease present added complexity given their age and the importance of growth and development along with other complex care needs. In the USA, the Centers for Medicare & Medicaid Services has recently recognized this complexity by providing enhanced reimbursement for pediatric chronic dialysis care through the use of a 30% add-on payment per treatment for pediatric dialysis patients. Broadening this approach for other high-intensity pediatric kidney disease services (such as advanced pre-dialysis chronic kidney disease and acute dialysis in the inpatient setting) should be considered. Consensus statement 5a Stronger engagement of pediatric nephrologists with trainees throughout undergraduate medical education and during early pediatric residency may increase interest in a career in pediatric nephrology. Consensus statement 5b Flexibility in fellowship length and design with individualized pathways will encourage more residents to pursue pediatric nephrology, improve training experience, and potentially reduce the debt burden associated with the mandatory 3-year training. Consensus statement 5c Retention in the existing workforce may be improved by efforts of the ASPN towards incentivizing clinical and research work, improving work-life integration, and increasing remuneration.
Rationale Multiple factors influence a medical student’s decision to choose pediatrics and a pediatric resident’s decision to choose pediatric nephrology, including early exposure to the subject, perceived difficulty of the subject, having role models and mentors in pediatric nephrology, and consideration of lifestyle and earning potential. Pediatric nephrology divisions will benefit from dedicated faculty who can intentionally work with trainees across all levels to improve exposure to the subject and provide positive role models for careers in the field. Pediatric sub-specialization is financially disincentivized for trainees, as it both delays completion of training and decreases lifetime earning potential. Notably, this is not the case with adult sub-specialization. Additionally, the length of fellowship training and the rigid template requiring mandatory research and scholarly activity may be a deterrent for some trainees. The 3-year training pathway also increases the debt burden of education and training, which, combined with the relatively lower salaries, leads to significant loss of earning potential. In a survey of almost 800 physicians in their second or third year of pediatric sub-specialty fellowship in the USA in 2007, 52% (n = 390) would have chosen a 2-year fellowship with less research or scholarly activity. More recently, in another survey by the American Academy of Pediatrics, almost 1500 fellows responded in favor of reducing the training duration to less than 3 years, or of having a shorter track for those planning a clinical path and a longer one for those pursuing research. The NASEM Report recommended that the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Pediatrics develop and evaluate alternative fellowship training requirements and pathways, including a 2-year option for those who wish to pursue a clinically focused career.
ASPN, with its in-depth understanding of the challenges facing the pediatric nephrology workforce, needs to be a part of this restructuring. Longitudinal data to understand the impact of such a change on the composition of the workforce merits collection. Finally, a concerted effort at multiple levels is needed to understand the reasons for attrition from the pediatric nephrology workforce and to implement strategies to improve retention. This includes incentivizing fellows to complete pediatric nephrology training utilizing loan repayment plans and visa sponsorships for international medical graduates. It also includes other interventions to optimize work-life integration, such as flexible work schedules, utilization of telehealth for urgent after-hours dialysis initiation, increased engagement of advanced practice providers, and working effectively with general pediatric practitioners to improve referral guidelines to pediatric nephrology and thus share the workload. Efforts to streamline maintenance of certification may also reduce attrition for nephrologists who may otherwise consider staying in the workforce longer. The pediatric nephrology community may also benefit from collecting data from nephrologists who decide to leave the workforce earlier than anticipated.
While the faculty who participated in the Summit represent a diverse group of pediatric nephrologists, we note that a key limitation was that some of the issues addressed are specific to providers practicing in the USA, with associated unique reimbursement issues. However, most topic areas have broader implications, and many of these topics also apply to other pediatric sub-specialties. We therefore anticipate that our consensus recommendations may prove useful to the larger international pediatric community. The Workforce Summit 2.0 consensus statements summarize the key issues facing the pediatric nephrology workforce and serve to guide next steps the community may take to strengthen its ability to care for children with kidney disease. Table summarizes the consensus statements from each working group and associated action items. The working group focused on defining a 1.0 cFTE proposes to undertake an observational study in which pediatric nephrologists report the billable and non-billable work that goes into a half-day outpatient nephrology clinic and a week of inpatient nephrology service. The group focused on academic RVUs proposes to create a rubric that will credit providers with the non-clinical work they perform in support of the academic mission.
The working group focused on describing the institutional value of a pediatric nephrology program has proposed a white paper summarizing the critical role that a robust nephrology program plays in supporting other, more incentivized, service lines. This group also intends to demonstrate the potential cost to institutions if they were unable to maintain a pediatric nephrology service. The working group focused on salary equity proposes increased transparency on salary within the ASPN community and intends to post salary metrics on the members’ webpage. Building on the recent reimbursement victory for children with kidney failure, the group will also work with other sub-specialties to demonstrate that children who require sub-specialty care are more complex than their adult counterparts and warrant increased reimbursement based upon that higher level of complexity. Lastly, the working group focused on recruitment and retention of the workforce plans to establish an exit interview system for pediatric nephrologists who leave the workforce early and will continue to advocate for flexibility in the duration of fellowship, depending on the career goals of the trainee. This group will also work to clarify standard operating procedures and referral plans between the primary care providers on the front lines and the pediatric nephrology community in order to streamline referrals. The purpose of this Summit was to create concrete steps for improvement in areas crucial to workforce recruitment, retention, and resiliency. These consensus statements outline key areas of focus to improve the sustainability of the pediatric nephrology workforce, and many align with issues facing other pediatric sub-specialties. Concerted efforts along these lines may therefore help address workforce challenges not just within pediatric nephrology but also among other pediatric sub-specialties.
Improving the strength and resiliency of the pediatric nephrology workforce — and pediatric sub-specialties in general — improves the value and care provided to children. |
Molecular and MALDI-TOF MS identification of swallow bugs | b32f5619-6746-493a-af49-f5ba14e17aef | 8627032 | Pathology[mh] | The “true bugs” refer to the order Hemiptera, with > 42,000 species in 90 families worldwide . This order comprises insects, including predatory entomophagous insects that feed on insects and small invertebrates, phytophagous insects and three families that are strictly hematophagous . The Cimicidae family includes around 100 species grouped into 24 genera . This family can be differentiated from other Hemiptera by being flat in shape, ovoid, flightless and wingless . In Europe, the Cimicinae subfamily is the only one prevalent. It is represented by the genus Cimex , which includes seven species . Within the Cimex group, two cosmopolitan species, C. lectularius , the common bed bug, and C. hemipterus , the tropical bed bug, feed on human blood . Otherwise, Cimex columbarius , C. pipistrelli and C. dissimilis occasionally feed on human blood when their preferred hosts (bats and birds) are absent . Three common species are involved in swallow bug infestation: the North American swallow bug C. vicarius , which is an ectoparasite of the cliff swallow, rarely reported in the barn swallow and house sparrow ; Cimex hirundinis , which is found in Eurasia, exclusively common to house martin nests and other birds; Cimex montandoni , which is found specifically in Romania in sand martin nests . Cimex vicarius is the only known vector of Buggy Creek virus (BCRV; Togaviridae, Alphavirus), which causes western equine encephalitis . Another arbovirus, the strain responsible for Venezuelan equine encephalitis (Tonate virus), has also been isolated in C. vicarius . Under certain conditions, C. hirundinis are able to feed on human blood, and their bite is known to be more painful than that of bedbugs . However, except for experienced entomologists, it is challenging to make a morphological distinction between Cimex spp. and even other arthropods. 
In addition, the number of entomologists is declining, and suitable documentation is not always accessible. The molecular approach has been assessed for its potential to overcome these limitations. However, molecular identification is relatively laborious, requires high-cost reagents and depends on both the availability of high-quality reference sequences in the GenBank database and the use of the correct gene fragment. Over the past decade, the matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS) technique has revolutionized the clinical microbiology field. It has also emerged in medical entomology. MALDI-TOF MS has been shown to be rapid, reliable and notably inexpensive (once the instrument is available) for identifying various species of arthropods. Recently, Benkacimi et al. showed that this innovative tool could be used as an alternative method to identify and discriminate between C. hemipterus and C. lectularius. Our study aimed to assess the ability of MALDI-TOF MS to identify swallow bugs collected from abandoned house martin nests in France. Molecular tools were also used to identify these swallow bugs and screen them for carriage of microorganisms. Swallow bug sampling and morphological identification Five hundred swallow bugs were sampled from abandoned swallow nests in a house located in Toulouse (43°36′16″N, 1°26′38″E) in southwest France in July 2020 (Fig. a). The house martin swallows [ Delichon urbicum (Passeriformes, Hirundinidae)] built jug-shaped mud nests under the eaves of the house, represented in Fig. b. The sampling was conducted in highly infested nests (Fig. c–e). Abandoned nests were placed in plastic storage containers, carefully transported to the insectarium of Marseille and broken into small pieces to pick out the swallow bugs. The swallow bugs were harvested using forceps, counted and then stored at − 20 °C.
The morphological identification to the species level was assessed and confirmed by an expert entomologist (JMB) using identification keys. A VHX-7000 digital microscope (Keyence, Osaka, Japan) and electron microscope (SEM Hitachi TM4000 Plus) were used to photograph morphological details. For the analysis, the insect stage and species were codified on the tube. DNA extraction and molecular identification of swallow bugs DNA extraction was performed from half of the body of each specimen using an EZ1 DNA Tissue Kit (Qiagen) following the same DNA extraction protocol as described by Benkacimi et al. The swallow bug specimens (nymphs and adults) added to the MS reference database were subjected to standard PCR in an automated DNA thermal cycler (Applied Biosystems, 2720, Foster City, CA, USA) using Folmer’s universal COI (cytochrome oxidase subunit I) barcoding primers (LCO1490, HCO2198) targeting a 710-base-pair fragment. The thermocycler program used for the amplification of the COI was previously described by Benkacimi et al. The sequences obtained were used to perform BLAST searches against the National Center for Biotechnology Information (NCBI) GenBank database and were then aligned using MEGA7. A phylogenetic tree was constructed and edited using the maximum likelihood method, with model selection determined by MEGA7, and FigTree 1.4.2, respectively. Statistical support for internal branches of the trees was evaluated by bootstrapping with 500 iterations. MALDI-TOF MS sample preparation for analysis Specimens of swallow bugs ( n = 115) were rinsed successively in 70% ethanol followed by two baths of distilled water and were dried on sterile filter paper. The heads of adults ( n = 65) and the cephalothoraces (head and thorax) of nymphs ( n = 50) were dissected under a Leica ES2 stereomicroscope 10×/30× using a new sterile blade.
They were then immersed for 2.5 min in distilled water, rinsed with distilled water and immersed for 2 min in 200 μl of 70% formic acid and 200 μl of 50% acetonitrile. The dissected parts were dried on sterile filter paper for MALDI-TOF MS analysis. The remaining body parts were stored at − 20 °C for molecular biology and supplementary analysis. The cephalothoraces were homogenized in 15 μl and the heads in 40 μl of the extraction solution (70% formic acid and 50% acetonitrile) using glass beads (1.0 mm diameter, BioSpec Products). All preparations were homogenized using the TissueLyser instrument (Qiagen, Germany). One microliter of the supernatant of the protein extract from each sample was spotted in quadruplicate on a MALDI-TOF MS steel target plate (Bruker Daltonics, Germany). The spots were left to dry and then covered with 1 μl of matrix solution composed of saturated α-cyano-4-hydroxycinnamic acid (Sigma, Lyon, France), 50% acetonitrile, 2.5% trifluoroacetic acid and HPLC-grade water. The target plate was dried at room temperature before being inserted into the MALDI-TOF MS instrument (Bruker Daltonics, Germany) for analysis. MALDI-TOF MS parameters Protein mass profiles were generated using a Microflex MALDI-TOF mass spectrometer (Bruker Daltonics) with Flex Control software (Bruker Daltonics), with parameters previously described. The profiles of the spectra obtained were viewed using Flex Analysis v.3.3 and MALDI Biotyper v.3.0 software, with ClinProTools v.2.2 for data processing. MALDI-TOF MS analysis and reference database creation The reproducibility of the MS spectra generated from adult and nymph swallow bugs was visualized with Flex Analysis v.3.3 and then exported to the ClinProTools v.2.2 software package (Bruker Daltonics, Germany) for data processing (smoothing, baseline subtraction).
Intra-species reproducibility and inter-species specificity were assessed by comparing and analyzing the spectral profiles obtained from the four spots of each individual specimen. Poor-quality spectra (intensity < 3000 arbitrary units [a.u.]) were excluded from the analysis. An MS dendrogram was created using MALDI-Biotyper software v.3.0 to visualize the heterogeneity level of MS spectra from adult and nymph swallow bugs (hierarchical clustering of the mass spectra). Good-quality spectra (high peak intensity and reproducibility) were then added to our MALDI-TOF MS in-house database after being molecularly confirmed. Blind tests The blind test was performed (MALDI-Biotyper software v.3.0, Bruker Daltonics) using all swallow bug specimens except those used as MS reference spectra. The accuracy of species identification was evaluated using the obtained log-score values (LSVs). LSVs range from 0 to 3; the spectrum with the highest LSV among the four spots of each specimen was retained as the identification. Microorganism detection in swallow bugs DNA from 115 swallow bugs, including 65 adults and 50 nymphs, was screened by qPCR using primers and probes targeting specific sequences of the following bacterial pathogens: Rickettsia spp. ( RKND03 ), Borrelia spp. ( Bor 16S ), Bartonella spp. ( Barto ITS2 ), Coxiella burnetii ( IS30A ) and Anaplasmataceae spp. ( 23S rRNA ). Samples positive for Anaplasmataceae spp. were then submitted to the qPCR system specific for detecting Wolbachia spp. For each qPCR plate, negative (qPCR reaction mix without DNA) and positive (DNA from our laboratory cultures) controls were used. Four Wolbachia -positive samples were submitted to standard PCR targeting a 438-base-pair fragment of the 16S rRNA gene and a 560-base-pair fragment of the ftsZ gene (Table ) and to sequencing to identify Wolbachia species.
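As an illustration of the blind-test scoring rule above (exclude spectra below 3000 a.u., then retain the highest LSV among a specimen's four replicate spots), here is a minimal Python sketch. The species-level cut-off of 1.8 and the spot data are assumptions for illustration only, not values stated in this protocol.

```python
# Minimal sketch of the blind-test scoring rule: discard low-intensity
# spectra, then retain the highest log-score value (LSV) among the four
# replicate spots of a specimen. Spot data below are hypothetical.

MIN_INTENSITY = 3000       # a.u.; spectra below this threshold were excluded
SPECIES_THRESHOLD = 1.8    # assumed cut-off for a reliable species-level match

def best_identification(spots):
    """Each spot is a dict with 'intensity', 'lsv' (0-3) and 'species'."""
    valid = [s for s in spots if s["intensity"] >= MIN_INTENSITY]
    if not valid:
        return None, 0.0, False
    best = max(valid, key=lambda s: s["lsv"])
    return best["species"], best["lsv"], best["lsv"] >= SPECIES_THRESHOLD

spots = [
    {"intensity": 2500, "lsv": 2.4, "species": "Cimex hirundinis"},   # excluded (low intensity)
    {"intensity": 4100, "lsv": 2.1, "species": "Cimex hirundinis"},
    {"intensity": 5200, "lsv": 2.3, "species": "Cimex hirundinis"},
    {"intensity": 3900, "lsv": 1.6, "species": "Cimex lectularius"},
]
species, lsv, reliable = best_identification(spots)
print(species, lsv, reliable)  # Cimex hirundinis 2.3 True
```

Note that the first spot, despite having the highest LSV, is discarded by the quality filter before the maximum is taken.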
Phylogenetic analyses based on 16S rRNA and ftsZ gene sequences were performed using the maximum likelihood method and the model selected by MEGA7 . Statistical support for internal branches of the trees was assessed by bootstrapping with 1000 iterations (only bootstrap values ≥ 50 were retained). Five hundred swallow bugs were sampled from abandoned swallow nests in a house located in Toulouse (43°36′16″N, 1°26′38″E) in southwest France in July 2020 (Fig. a). The house martin swallows [ Delichon urbicum (Passeriformes, Hirundinidae)] built jug-shaped mud nests under the eaves of the house, represented in Fig. b. The sampling was conducted in highly infested nests (Fig. c–e). Abandoned nests were placed in plastic storage containers, carefully transported to the insectarium of Marseille and broken into small pieces to pick out the swallow bugs. The swallow bugs were harvested using forceps, counted and then stored at − 20 °C. The morphological identification to the species level was assessed and confirmed by an expert entomologist (JMB) using the identification keys . A VHX-7000 digital microscope (Kayence, Osaka, Japan) and electron microscope (SEM Hitachi TM4000 Plus) were used to photograph morphological details. For the analysis, the insect stage and species were codified on the tube. DNA extraction was performed from the half body of each specimen using an EZ1 DNA Tissue Kit (Qiagen) following the same DNA extraction protocol as described by Benkacimi et al. . The swallow bug specimens (nymphs and adults) added into the MS reference database were subjected to standard PCR in an automated DNA thermal cycler (Applied Biosystems, 2720, Foster City, CA, USA) using Folmer’s universal COI (cytochrome oxidase subunit I) barcoding primers (LCO1490, HCO2198) targeting 710 base pairs . The thermocycler program used for the amplification of the COI was previously described by Benkacimi et al. . 
The sequences obtained were used to perform BLAST searches against the National Center for Biotechnology Information (NCBI) GenBank sequence database and were then aligned using MEGA7 . A phylogenetic tree was constructed using the maximum likelihood method with the model selected by MEGA7, and was edited using FigTree 1.4.2 . Statistical support for internal branches of the trees was evaluated by bootstrapping with 500 iterations.

Specimens of swallow bugs ( n = 115) were rinsed successively in 70% ethanol followed by two baths of distilled water and were dried on sterile filter paper . The heads of adults ( n = 65) and the cephalothoraces (head and thorax) of nymphs ( n = 50) were dissected under a Leica ES2 stereomicroscope (10×/30×) using a new sterile blade. They were then immersed for 2.5 min in distilled water, rinsed with distilled water and immersed for 2 min in 200 μl of 70% formic acid and 200 μl of 50% acetonitrile. The dissected parts were dried on sterile filter paper for MALDI-TOF MS analysis . The remaining body parts were conserved at − 20 °C for molecular biology and supplementary analyses. The cephalothoraces were homogenized in 15 μl and the heads in 40 μl of the extraction solution (70% formic acid and 50% acetonitrile) using glass beads (1.0 mm diameter, BioSpec Products). All preparations were homogenized using a TissueLyser instrument (Qiagen, Germany). One microliter of the supernatant of the protein extract from each sample was spotted in quadruplicate onto a MALDI-TOF MS steel target plate (Bruker Daltonics, Germany). The spots were left to dry and then covered with 1 μl of matrix solution composed of saturated α-cyano-4-hydroxycinnamic acid (Sigma, Lyon, France), 50% acetonitrile, 2.5% trifluoroacetic acid and HPLC-grade water . The target plate was dried at room temperature before being inserted into the MALDI-TOF MS instrument (Bruker Daltonics, Germany) for analysis.
Protein mass profiles were generated using a Microflex MALDI-TOF mass spectrometer (Bruker Daltonics) with Flex Control software (Bruker Daltonics), with parameters as previously described . The profiles of the spectra obtained were viewed using Flex Analysis v.3.3 and MALDI Biotyper v.3.0 software, with ClinProTools v.2.2 for data processing. The reproducibility of the MS spectra generated from adult and nymph swallow bugs was visualized with Flex Analysis v.3.3 and then exported to the ClinProTools v.2.2 software package (Bruker Daltonics, Germany) for data processing (smoothing, baseline subtraction) .
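The quality screen applied before database creation (spectra below 3000 a.u. excluded) amounts to a simple intensity filter. The peak-list representation and spot data below are assumptions for illustration, not ClinProTools output.

```python
# Hedged sketch of the spectral quality screen: a spectrum is kept only if at
# least one peak reaches the 3000 a.u. floor quoted in the methods. Spectra
# are represented here as lists of (m/z, intensity) pairs, which is an
# assumption for illustration.

MIN_INTENSITY_AU = 3000  # exclusion threshold quoted in the methods

def passes_quality(peaks, floor=MIN_INTENSITY_AU):
    """True when some peak reaches the intensity floor (in arbitrary units)."""
    return any(intensity >= floor for _mz, intensity in peaks)

spectra = {
    "spot_1": [(2500.1, 4100.0), (3120.7, 900.0)],  # hypothetical peak lists
    "spot_2": [(2500.1, 1200.0), (3120.7, 900.0)],
}
kept = [name for name, peaks in spectra.items() if passes_quality(peaks)]
print(kept)  # ['spot_1']
```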
Morphological characterization

In total, 550 bugs were picked out from the abandoned nests: 377 adults and 173 nymphs. Adult specimens were morphologically identified as C. hirundinis . They were characterized by the presence of long, pale bristles and less protruding eyes (Fig. a and b). Compared to our laboratory-reared bed bugs, they were smaller and more pubescent (Fig. a), and the anterior lobes of the pronotum (Fig. b) were moderately developed compared to the bed bug pronotums (Fig. c and d). The scanning electron microscope analysis revealed a detailed visualization of the species. The pronotum of C. hirundinis (Fig. j) is remarkably less concave than those of C. lectularius and C. hemipterus . At the pronotum sides, the bristles of C. hirundinis (Fig. i) are fine, longer and more numerous than the bed bug bristles (Fig. e and g), as they are over the whole-body surface (Fig. a and b). The C. lectularius pronotum bristle shape (forked-sharpened, showing jagged crowns) appeared to be identical to that of C. hirundinis pronotum bristles, but shorter and thicker (Fig. g and i). However, C. hemipterus bristles seemed to be smoother and not jagged (Fig. e). The C.
hirundinis male intromittent genital organ is illustrated in Additional file : Figure S1a and c, and the female paragenital sinus in Additional file : Figure S1b and d. The nymphs were grouped into stage 2, stage 3 and stage 4 based on body size (Additional file : Figure S2).

Molecular identification of swallow bugs

The morphological identification of C. hirundinis was confirmed by molecular tools. Five sequences from adults and four from nymphs were successfully obtained using the COI gene. NCBI BLAST analysis of COI sequences from the nine specimens of C. hirundinis added to our MALDI-TOF MS in-house database revealed that they were 98.66–99.12% identical to Oeciacus hirundinis (GenBank accession nos. MG596808-GU985544) (Table ). To place the obtained sequences among the GenBank COI sequences, a phylogenetic tree was constructed on the basis of COI fragment sequences. The tree showed that the sequences of C. hirundinis clustered with the sequence of O. hirundinis deposited in GenBank and grouped within the Cimicinae subfamily (Fig. ). The COI gene sequences of C. hirundinis were deposited in GenBank. The FASTA of the COI sequence is attached in Additional file : Dataset S1.

Comparison of C. hirundinis adults and nymphs

Sixty-five adult specimens and 50 nymphs (stages 2, 3 and 4), randomly selected, were subjected to MALDI-TOF MS to identify the specimens and to assess the reproducibility and specificity of the MS spectra. Spectral profile analysis using Flex Analysis software showed that 62/65 (95.38%) of C. hirundinis adults and 50/50 (100%) of nymphs provided good-quality MS spectra (Table ) (Fig. a). Principal component analysis using ClinProTools v.2.2 software revealed a clear distinction between the nymphal and adult stages (Fig. b). These results were confirmed by the MS dendrogram (Fig. a) generated with MALDI-Biotyper 3.0.
The MS protein profiles of both stages revealed sufficient discrimination between the MS spectra of adults and nymphs. However, no clustering was obtained according to nymphal stage (Fig. a).

MS identification of C. hirundinis adults and nymphs

To validate the species identification by MALDI-TOF MS, five high-quality spectra of the adult stage and four spectra of the nymphal stage were randomly selected for database creation using MALDI-Biotyper 3.0 (Table ). A blind test against our MS in-house database was carried out using the remaining spectra. The results showed that 100% (57/57) of adult specimens were correctly identified, with LSVs between 1.922 and 2.518. Cimex hirundinis nymphs were likewise accurately identified at the species level, 100% (46/46), with LSVs between 2.188 and 2.665 (Table ). All log-score values obtained for adults and nymphs are represented in Fig. b. For nymphs, the LSV mean was 2.461 ± 0.100 and the median was 2.476. For adults, the LSV mean was 2.328 ± 0.143 and the median was 2.363.

Microorganism screening

One hundred fifteen specimens of C. hirundinis , including 65 adults and 50 nymphs, were first screened for microorganisms by qPCR. Of the C. hirundinis specimens, 98.3% (113/115) tested positive for Wolbachia spp. ( 23S ) (Table ). Four 16S sequences obtained from Wolbachia -positive specimens were identical to each other and showed 100% homology with the sequence of Wolbachia massiliensis isolated from C. hemipterus collected in Senegal (GenBank accession no. CP061738). Similarly, NCBI BLAST analysis of the ftsZ sequences revealed that they were 100% identical to the sequence of W. massiliensis (GenBank accession no. CP061738) (Table ). Phylogenetic analyses using the maximum likelihood method showed that the obtained sequences belonged to the new T supergroup and clustered with W. massiliensis isolated from C. hemipterus for both genes (Fig. a and b). No Bartonella spp., Rickettsia spp., Borrelia spp.
and C. burnetii were detected.
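The blind-test summaries above (mean ± SD and median LSVs) are ordinary descriptive statistics; they can be reproduced with Python's statistics module. The score list below is an invented stand-in, not the study's actual nymph or adult LSVs.

```python
# Reproducing the kind of LSV summary reported for the blind test
# (mean ± sample SD, median). The values are illustrative placeholders.
import statistics

lsvs = [2.188, 2.35, 2.41, 2.476, 2.52, 2.60, 2.665]
mean = statistics.mean(lsvs)
sd = statistics.stdev(lsvs)       # sample standard deviation
median = statistics.median(lsvs)
print(f"mean {mean:.3f} ± {sd:.3f}, median {median:.3f}")
```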
MALDI-TOF MS has increasingly been used in the clinical microbiology field for rapid and reliable microorganism classification, and its advantages are currently driving its application in routine microbiological laboratories . More recently, this proteomic tool has also proven its effectiveness in malacology . Several entomological researchers have reported the usefulness of MALDI-TOF MS as a time-saving, effective and less laborious approach for the identification of various arthropods (ticks, fleas, mosquitoes, bed bugs, biting midges, triatomines), targeting different body parts for protein extraction that generate specific spectra for each species .
The identification of species of the genus Cimex is complicated because it is based on proportional measurements, including the ratio between the length and the width of the pronotum and the length of the bristles on the sides of the pronotum . In addition, this criterion alone is not enough to differentiate between species because of the closeness to the cutoff ratio of 2.5 . Therefore, in the present work, we showed the usefulness of MALDI-TOF MS as a complementary and alternative tool to rapidly identify swallow bug specimens ( C. hirundinis ) stored at − 20 °C, without requiring any entomological expertise. MALDI-TOF MS sample preparation is conditioned by different parameters (body part used, preservation method, extraction solution volume adjusted for protein extraction, homogenization method) that can affect MS spectra quality . Cimex hirundinis is the smallest species in Europe and is noticeably smaller than the two bed bug species (Fig. a). Consequently, for the MS identification of nymphs, we selected the cephalothorax (head and thorax) as the body part, and the extraction solution volume was adjusted to 15 µl, as the nymphs are smaller than adults. Moreover, we used glass beads as disruptors because they provide a simple, practical sample preparation and do not require any previous experience, in contrast to glass powder. The current findings agree with previous studies on the ability of MALDI-TOF MS to distinguish between arthropod species . Based on the morphological criteria, the specimens collected from the house martin swallow nests were all identified as C. hirundinis at different stages, including adults and nymphs. In this study, we report the first case of swallow bug ( C. hirundinis ) invasion of a habitation in France (Fig. b–d). Hansel et al. reported a similar case of human infestation by martin bugs ( C. hirundinis ) in Italy.
Moreover, other cases of human infestation with swallow bugs have been reported in the US and Japan . To date, in France, in addition to C. hirundinis we have found two species ( C. hemipterus and C. lectularius ) that bite and feed on human blood . In the past, Lugger (1896) stated that bugs comparable to human bed bugs attacked swallows and bats. Those bugs were spotted in swallow nests and often reached human habitations, but their bodies were relatively smaller. It has also been reported that the American swallow bug ( C. vicarius ) is identical in general shape to the common bed bug ( C. lectularius ), but smaller and with more bristles . In the US, in the 1890s and 1900s, swallow bugs seem to have infested human habitations and been misidentified as bed bugs . Currently, most people would find it difficult to differentiate between C. lectularius and C. hemipterus ; even when a bed bug infestation is positively identified, they probably would not be able to discriminate between the species. Consequently, when a bed bug infestation is suspected, it is crucial to examine and identify the species, because in some cases the infestation might be due to either swallow bugs or bat bugs . In such cases, the application of the MALDI-TOF MS approach is very useful and recommended because it allows rapid and specific identification of the bugs, particularly when morphological identification at the species level becomes problematic for clinicians . As recently stated, bed bugs have probably been either misidentified as cockroaches because of their small size or not considered as insects at all in their early stages . Here, we highlight the advantage of the MALDI-TOF MS technique in circumventing the drawbacks of morphological identification. The upgrade of our in-house database with the relevant MS reference spectra resulted in 46/46 (100%) C. hirundinis nymphs and 57/57 (100%) C.
hirundinis adult specimens (Table ) being correctly and reliably identified at the species level with LSVs > 1.8. Nymph identification was based only on size, and we estimated three stages: stage 2, 3 and 4. The dendrogram of MS spectra confirmed that clustering according to nymphal stage was not observed, which is explained by the diversity of nymph sizes. Consequently, we could not perform a robust interpretation of these results, but this does not affect the reliability of the tool. However, further studies are necessary to precisely identify the five nymphal stages using laboratory-reared C. lectularius and C. hemipterus . In the current study, we demonstrated the strength of the congruence among MALDI-TOF MS, morphological and molecular identification. There was no ambiguity in identification at the species level, which shows that this proteomic tool is fully valid, in concordance with previous studies . The molecular data analysis was based on the COI gene. This marker is widely used for taxonomic and phylogenetic questions within the Cimicidae and in the genus Cimex . Balvin et al. proposed the genus Oeciacus as a synonym of Cimex based on molecular data analyses. Like Schuh and Weirauch , we followed this proposition in our publication. In the microorganism screening section of our work, no pathogens were detected. To the best of our knowledge, infectious agents have not yet been documented in C. hirundinis , and it has never been proposed as a vector of human pathogens. However, one report mentioned C. hirundinis as a potential vector of paramyxovirus type 4 (0.1% infection rate in adult bugs and 0.4% in second to fifth nymphal stages, showing transstadial transmission) . On the other hand, some arboviruses have been isolated from C. vicarius , emphasizing its vectorial role in transmitting Buggy Creek virus, which causes western equine encephalitis. This raises a question about possible transmission to humans and livestock .
We detected a novel Wolbachia endosymbiont, previously reported as W. massiliensis , in the C. hirundinis studied here. This new strain was recently first isolated from C. hemipterus collected in Senegal and then described as a new clade (clade T) . In our study, we report for the first time, to our knowledge, a novel Wolbachia in the genus Cimex , specifically in the European swallow bug ( C. hirundinis ), whose sequences had 100% homology with both the 16S and ftsZ sequences belonging to the T-supergroup strain ( W. massiliensis ). Based on previous studies, infection with Wolbachia species of the F clade is common in the Cimicinae subfamily, whereas infection with the A clade is prevalent in the Afrocimicinae and Haematosiphoninae subfamilies . At present, T-supergroup infections have been reported in two species, C. hirundinis and C. hemipterus , which belong to the Cimicinae subfamily and originate from two different continents. Conversely, Wolbachia in the American swallow bug ( C. vicarius ) is phylogenetically classified in the F supergroup . In addition, a detailed study on Wolbachia diversity in bed bugs ( C. lectularius ) collected from different locations in France, as well as other studies, reported that the Wolbachia strain detected in C. lectularius belonged to the F supergroup . Our findings also revealed that the prevalence of Wolbachia in the studied C. hirundinis was markedly higher than that reported in C. lectularius from France . Here we provide the first phylogenetic characterization, to our knowledge, of the Wolbachia infecting C. hirundinis , revealing its classification in the recently discovered T supergroup associated with C. hemipterus . Thus, we support the suggestion made by Ros et al. that other supergroups that taxonomically enlarge Wolbachia diversity may be discovered as more potential host species are examined and screening methods improve.
In the present study, we report, for the first time to our knowledge, a case of human infestation by swallow bugs ( C. hirundinis ) in France. This raises awareness of a new type of infestation that may easily be mistaken for a bed bug infestation. Accordingly, in bed bug control missions, it is recommended to identify bird nests in buildings or their surroundings, and the bugs involved, to avoid recurrent infestations. In addition, we showed the usefulness and robustness of MALDI-TOF MS for the rapid identification of adult and nymph C. hirundinis specimens with minimal sample requirements. However, further studies are required to validate the reliability of the MALDI-TOF MS protocol for other Cimicidae species before it can be fully incorporated into diagnostic routines. In this work, we also seized the opportunity to phylogenetically characterize the novel Wolbachia strain ( W. massiliensis ) infecting C. hirundinis and compared it to other recognized Wolbachia clades obtained from different arthropods. It is important to identify Wolbachia diversity in the cimicids infesting humans for control purposes. Additional file 1: Figure S1. Digital microscope (DM) and scanning electron microscope (SEM) images showing a ventral view of C. hirundinis : male intromittent organ ( a and c ); female paragenital sinus ( b and d ). Additional file 2: Figure S2. SEM images showing a representation of an egg and nymphs at different stages of C. hirundinis . a egg, b nymph II, c nymph III, d nymph IV. The nymphal stages were identified based on body size. Additional file 3: Dataset S1. COI sequences of C. hirundinis adults and nymphs. Additional file 4: Dataset S2. 16S and ftsZ sequences of Wolbachia isolated from C. hirundinis . |
Characterization of non-O157 enterohemorrhagic | d11e6da6-0149-42b8-9357-e089cc300470 | 11580514 | Microbiology[mh] | Most E. coli strains are harmless . By acquiring mobile genetic elements through horizontal gene transfer, normally harmless E. coli can rapidly transform into a pathogen with a high adaptive capacity. Besides the prominent role of E. coli as a contributor to intestinal diarrheal diseases, many pathogenic isolates also cause extra-intestinal illnesses and are known as extraintestinal pathogenic E. coli (ExPEC) . Pathogenic E. coli can also lead to more than two million fatalities annually . Feces and wastewater treatment plants can both release infectious E. coli into the environment . Raw milk and cheese can be significant sources of hazardous pathogenic E. coli for people . There are nine different pathotypes of diarrheagenic E. coli strains: enterohemorrhagic E. coli (EHEC), enterotoxigenic E. coli (ETEC), enteropathogenic E. coli (EPEC), enteroinvasive E. coli (EIEC), enteroaggregative E. coli (EAEC), diffusely adhering E. coli (DAEC), Shiga toxin-producing E. coli (STEC), adherent-invasive E. coli (AIEC) and cell-detaching E. coli (CDEC) . The STEC are pathogenic bacteria that infect people through contaminated food and water. Shiga toxins (stx) are produced during STEC infections. Infection outcomes are determined by an array of strain and host variables, and symptoms can range from mild to severe bloody diarrhoea, with hemolytic uremic syndrome (HUS) being a possible consequence. The EHEC are a prominent category of STEC that produce one or more stx, which constitute the main virulence characteristic of this pathogroup of E. coli . This pathotype causes a variety of infections, from barely detectable diarrhea to more serious manifestations such as hemorrhagic colitis (HC) and the emergence of potentially fatal HUS. EHEC infections are a leading cause of acute renal failure in children in many countries, making kids and infants the most vulnerable patients .
Whereas not all STEC induce HC or HUS, the term EHEC is generally reserved for those that do . Serotype O157:H7 is regarded as the prototype of this pathogenic group, as it was the initial cause of the HC and HUS cases in the 1980s, and it has since contributed to many outbreaks across the globe . While O157:H7 is the most prominent EHEC serotype, non-O157 EHEC strains are now recognized as hazardous causes of disease. O26, O103, O111 and O145 are the most prevalent non-O157 O-serogroups in Germany, accounting for one-third of EHEC outbreaks, including severe diarrhea and life-threatening HUS . The detection rate of these serogroups has increased worldwide . This increase may have been caused by greater awareness among laboratories or the development of improved detection procedures; however, it may also result from a higher prevalence of infections with these pathogens, which are linked to human and animal disease . Toxin production, biofilm formation, iron acquisition systems, serum resistance, capsules and adhesins are just a few of the many virulence factors that E. coli possesses . These factors encourage tissue colonization, cause damage and promote the spread of disease. Owing to these virulence traits, the microorganism can colonize anatomical niches, override host defense mechanisms and trigger an inflammatory response in the host . Shiga toxin 1 and 2, intimin and enterohemolysin play vital roles in the O157 EHEC pathogenic mechanism . The genus Escherichia comprises a wide range of species, including E. coli , E. albertii , E. fergusonii and five cryptic Escherichia clades (I–V) . Because of the nucleotide identity identified between clade I and E. coli strains, Escherichia clade I should be regarded as a subspecies of E. coli . Based on the most recent phylogenetic assortment, E. coli strains have been grouped into eight phylogenetic groups: A, B1, B2, C, D, E, F and clade I .
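Assignment to these phylogroups is commonly done with Clermont's PCR decision tree. The sketch below encodes the widely cited triplex scheme (chuA, yjaA, TspE4.C2), which resolves only groups A, B1, B2 and D; the extended quadruplex method is needed for the full eight groups listed above.

```python
# Classic Clermont triplex decision tree from gene presence/absence. This is
# the original four-group scheme, shown for illustration; it predates and is
# simpler than the quadruplex revision that defines groups C, E, F and clade I.

def clermont_triplex(chuA: bool, yjaA: bool, tspE4: bool) -> str:
    """Assign an E. coli phylogroup (A, B1, B2 or D) from triplex PCR results."""
    if chuA:
        return "B2" if yjaA else "D"
    return "B1" if tspE4 else "A"

print(clermont_triplex(chuA=True, yjaA=True, tspE4=False))  # B2
```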
Previous studies have shown that nonpathogenic strains belong to phylogenetic groups A and B1, whereas pathogenic ones belong to phylogenetic groups B2 and D . As a result, phylogenetic clustering of E. coli strains is useful for displaying the relationship between phylotypes and the diseases induced by the organism. Determining the source of infection and the specific types of pathogens using approaches such as molecular typing is necessary to investigate the prevalence of hospital-acquired infections . Moreover, accurate and rapid molecular detection techniques are required to recognize pathogenic E. coli in food and animals in order to improve food safety and human health, as well as to minimize the geographical extent of outbreaks . The enterobacterial repetitive intergenic consensus polymerase chain reaction (ERIC-PCR) technique serves as a DNA fingerprinting tool for assessing bacterial clonal variability . Accordingly, ERIC-PCR was applied to examine the genetic similarity of EHEC isolates obtained from diverse sources. This also allows evaluation of the various possible contamination sources . Traditional molecular typing techniques, such as phage typing, antibiotic resistance pattern analysis and plasmid profiling, can be labor-intensive and complex, have difficulty distinguishing strains with high genetic similarity, and are poorly portable because they index variation that is hard to compare across laboratories. As a result, these approaches are not ideal for determining the source of infection in epidemiological investigations . To solve these issues, sequence-based typing methods have become the gold standard in epidemiological monitoring for studying microbial population genetics. Multi-locus sequence typing (MLST) is a typing method that is widely used for clinically relevant bacterial species, and online databases have been developed, allowing data to be analyzed simply .
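The ST assignment MLST performs is essentially a lookup of a seven-number allelic profile. A hedged sketch, using the locus names of the Achtman E. coli scheme with an illustrative profile table (real assignments come from curated databases such as PubMLST):

```python
# Hedged sketch of MLST sequence-type (ST) assignment: each of seven
# housekeeping loci receives an allele number, and the ordered 7-tuple is
# looked up among known profiles. Locus names follow the Achtman E. coli
# scheme; the table entries below are illustrative, not real PubMLST data.

LOCI = ("adk", "fumC", "gyrB", "icd", "mdh", "purA", "recA")

ST_TABLE = {
    (10, 11, 4, 8, 8, 8, 2): 10,     # illustrative entries
    (6, 4, 12, 1, 20, 13, 7): 999,
}

def sequence_type(alleles):
    """Map a locus -> allele-number dict to an ST, or None for a novel profile."""
    return ST_TABLE.get(tuple(alleles[locus] for locus in LOCI))

isolate = dict(zip(LOCI, (10, 11, 4, 8, 8, 8, 2)))
print(sequence_type(isolate))  # 10
```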
The MLST technique generates an allelic profile by analyzing the sequences of seven housekeeping genes. The resulting allelic profile is summarized by the assignment of a sequence type in an electronic database, and closely related strains are categorized into clonal complexes . Bearing in mind the possibility of transmission of pathogenic E. coli to humans through the consumption of contaminated food, this study aims to detect the prevalence of non-O157 EHEC isolates among different clinical, food and sewage sources in Egypt. In addition, monitoring of various virulence determinants is important to assess the pathogenic potential of these emerging strains.

Specimens' collection

During the period from November 2018 to April 2020, a total of 118 clinical samples were collected, including 41 from urine and 77 from stool, together with 217 samples from food and environmental sources: 93 animal products (meat, luncheon, smoked turkey, beef burger, ground beef and pastrami), 89 dairy products (cheese, yogurt and raw milk), 11 fresh vegetables (colored pepper, lemon and tomatoes), 15 chicken samples (cooked and roasted), two fish samples and seven sewage water samples (supplementary Table 1). Urine and stool samples were obtained from Mansoura University hospitals and private medical analysis laboratories from separate human cases after written informed consent from each participant. The experimental protocol used in this research adheres to the ethical guidelines and principles of care, use and handling of human subjects in medical research established by "The Research Ethics Committee, Faculty of Pharmacy, Mansoura University, Egypt" (code: 2022–126), which is governed by the World Medical Association's Code of Ethics (Declaration of Helsinki).
Food samples (meat and dairy products) were obtained from one hundred eighty-two butchers’ shops and supermarkets, chicken samples from fifteen poultry shops, and fish samples from two separate fish shops in Mansoura and Damietta cities, Egypt. Sewage samples were obtained from sewage water at different places in Mansoura and Damietta cities. The fecal, meat and other food samples were collected in sterile plastic containers, while the milk, yogurt, sewage water and urine samples were collected in sterile Falcon tubes. Each sample was kept cool in an icebox and transferred directly to the Microbiology and Immunology laboratory at the Faculty of Pharmacy, Mansoura University, where it was stored at 4 °C until use; all samples were processed as quickly as possible to avoid deterioration and contamination. Isolation and identification of E. coli The collected specimens were cultivated for isolation of E. coli in nutrient broth medium containing yeast extract (0.2%), peptone (0.5%) and sodium chloride (0.5%) and incubated for 24 h at 37 °C. An appropriate inoculum was then streaked onto MacConkey's agar selective plates, which were incubated at 37 °C for 24 h. All plates were then examined for colony morphology. Suspected bacterial colonies, identified by lactose fermentation on MacConkey's agar and negative Gram staining, were picked and inoculated onto Eosin methylene blue (EMB) agar plates for further confirmation . Colonies with the characteristic green metallic sheen on EMB agar were subjected to additional confirmation using biochemical tests . E. coli was isolated from food and sewage water samples according to Dickson et al. 1995 : food samples (approximately 5 g) and liquid samples (about 5 mL) were homogenized and suspended 1:10 in peptone buffered saline, incubated with shaking at 37 °C for 24–48 h, and 100 µL of each agitated sample was then plated onto MacConkey's agar plates.
The resulting colonies were sub-cultured onto nutrient agar slants and subjected to various biochemical tests intended for conventional identification of E. coli . Detection of O-serogroups The confirmed E. coli isolates were submitted to the Center of Food Analysis, Faculty of Veterinary Medicine, Benha University, Egypt. Serotyping was performed using sets of E. coli antisera for rapid diagnosis (Denka Seiken Co., Japan) . Phenotypic identification of some virulence determinants of EHEC isolates Biofilm assay using microtiter plate method Biofilm formation was determined quantitatively using a 96-well flat-bottomed polystyrene microtiter plate . A pure colony from an overnight culture streaked on tryptic soy agar (TSA) (Sigma) was sub-cultured into 5 mL tryptic soy broth (TSB) (Sigma) supplemented with 0.25% anhydrous glucose for 24 h at 37 °C. Aliquots of 200 µL of pre-adjusted culture (OD600 = 0.25) were placed in four adjacent wells, and a negative control containing only TSB was included. After incubation of the plate at 37 °C without shaking for 18–24 h, the contents of the wells were aspirated and rinsed three times with 200 µL PBS (pH 7.4) to eliminate any non-adherent cells. The adherent cells were fixed with 150 µL of absolute methanol for 15 min and left to dry. The fixed biofilm was then stained with 150 µL of 1% w/v crystal violet for 20 min. The plate was subsequently rinsed three times with distilled water and air-dried. The stained biofilm was resolubilized by adding 150 µL of 33% v/v glacial acetic acid per well, and the OD of the tested isolates was determined at 490 nm using a microtiter plate reader. The mean of the four ODs for each isolate was calculated. The capacity for biofilm formation was assessed according to Schönborn et al., 2017 . Briefly, the cut-off optical density (ODc) was set as three standard deviations above the mean OD of the negative control.
Strains were categorized as: non-biofilm producer, NBP (OD ≤ ODc); weak biofilm producer, WBP (ODc < OD ≤ 2 × ODc); moderate biofilm producer, MBP (2 × ODc < OD ≤ 4 × ODc); and strong biofilm producer, SBP (OD > 4 × ODc). Swimming motility assay Motility of the EHEC isolates was evaluated according to Murinda et al., 2002, and Shields and Cathcart, 2012 . Briefly, the bacterial culture was tested for motility using triphenyltetrazolium chloride (TTC) medium. After 24–48 h of incubation at 37 °C, growth was indicated by the appearance of a red color; where motility occurred, the red color was visible surrounding the inoculation area. Hemagglutination assay All isolates were tested for their ability to agglutinate human erythrocytes by the slide agglutination technique using human red blood cells (RBCs) (O-type) obtained from the Blood Bank, Gastroenterology Hospital, Mansoura University. Isolates were inoculated into nutrient broth and incubated at 37 °C for 48 h. An RBC suspension was prepared by washing human RBCs (O-type) three times with physiological saline (0.9% NaCl) and resuspending them in saline to a final concentration of 3%. One drop of the bacterial culture was blended with one drop of the RBC suspension on a clean glass slide, and the mixture was vigorously stirred with a sterile tip to promote agglutination. The presence of erythrocyte clumping within 5 min of mixing was considered a positive agglutination result. Screening of hemolysin activity The hemolytic activity of the EHEC isolates was determined by streaking onto 5% blood agar plates. Plates were evaluated after incubation for 24 h at 37 °C . Hemolysin production was determined quantitatively according to Rossignol et al., 2008 . All EHEC isolates were cultured in TSB at 37 °C for 24 h with shaking, and the bacterial extract was obtained by centrifugation at 10,000 rpm for 10 min.
A mixture of 500 µL RBC suspension (2% O-type RBCs in 10 mM Tris HCl, pH 7.4) and 500 µL of the previously prepared bacterial extract was incubated for 2 h at 37 °C. A positive control (T) using 500 µL of 0.1% sodium dodecyl sulphate (SDS) and a negative control (B) using 500 µL TSB with 500 µL RBC suspension were included. After centrifugation at 10,000 rpm for 10 min, the amount of hemoglobin liberated in each sample (X) was measured at 540 nm. Three replicates were performed for each isolate. The hemolysis percentage was determined as follows: Hemolysis % = ((X − B) / (T − B)) × 100. Serum resistance assay Serum resistance of the strains was analyzed using a turbidimetric assay, as described by Vandekerchove et al., 2005 . In a 96-well microtiter plate, 50 μL of bacterial culture (OD600 = 0.1) was combined with 150 μL of normal human serum obtained from the Blood Bank, Gastroenterology Hospital, Mansoura University. Using a microplate reader, the initial and final absorbance (after 3 h of incubation at 37 °C) were measured at 600 nm. The absorbance of each isolate was estimated as the average of three replicates, and the percentage of remaining absorbance relative to the initial absorbance was calculated. A strain was classified as serum resistant (SR) if the remaining absorbance after 3 h was higher than 150%, intermediate resistant (IR) if it was between 125 and 150%, slow-intermediate resistant (S-I) if it was between 100 and 125%, and serum sensitive (SS) if it was less than 100%.
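The cut-offs above lend themselves to small classification helpers. The following sketch encodes the biofilm categories of Schönborn et al., the hemolysis percentage formula, and the serum resistance categories of Vandekerchove et al.; the function names and example readings are illustrative and not part of the study's pipeline, and the treatment of exact boundary values is an assumption, since the text states open ranges.

```python
# Illustrative helpers for the phenotypic classifications described above.
# All names are hypothetical; thresholds follow the text.
from statistics import mean, stdev

def biofilm_category(od: float, negative_controls: list[float]) -> str:
    """Classify biofilm production against ODc = mean + 3*SD of negative controls."""
    odc = mean(negative_controls) + 3 * stdev(negative_controls)
    if od <= odc:
        return "NBP"   # non-biofilm producer
    if od <= 2 * odc:
        return "WBP"   # weak biofilm producer
    if od <= 4 * odc:
        return "MBP"   # moderate biofilm producer
    return "SBP"       # strong biofilm producer

def hemolysis_percent(x: float, b: float, t: float) -> float:
    """Hemolysis % = ((X - B) / (T - B)) * 100, where X is the sample A540,
    B the negative (TSB) control and T the positive (SDS) control."""
    return (x - b) / (t - b) * 100

def serum_category(initial_od: float, final_od: float) -> str:
    """Classify serum resistance from remaining absorbance after 3 h;
    boundary values are assumed to fall in the higher category."""
    remaining = final_od / initial_od * 100
    if remaining > 150:
        return "SR"    # serum resistant
    if remaining >= 125:
        return "IR"    # intermediate resistant
    if remaining >= 100:
        return "S-I"   # slow-intermediate resistant
    return "SS"        # serum sensitive
```

For example, a well reading of twice the cut-off OD would be scored WBP, and an isolate whose absorbance doubles over 3 h in serum (200% remaining) would be scored SR.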
Molecular detection of some virulence genes Genomic DNA extraction DNA templates were extracted from the tested isolates by the boiling method previously reported by Said et al., 2018 . Concisely, purified E. coli colonies were suspended in 100 μL of sterile DNase-free water in a 0.2 mL sterile PCR tube, heated in a thermocycler heat block at 95 °C for 10 min, centrifuged at 10,000 rpm for 3 min and stored at −20 °C. The supernatant served as the DNA template for all subsequent PCR sets. Screening of some toxin genes by PCR Shiga toxin I (stx1), Shiga toxin II (stx2), intimin (eae) and hemolysin (ehxA, ehlyA, hlyA and sheA) genes were screened among the tested isolates using PCR. The reaction mixture, with a total volume of 25 μL, consisted of 12.5 μL master mix (DreamTaq Green master mix), 2 μL of template DNA and 0.5 µL of each primer listed in supplementary Table 2 (Invitrogen™, UK), with the final volume completed with DNase-free water. stx gene subtyping Subtyping of the stx1 and stx2 genes was carried out as previously reported using the primer pairs listed in supplementary Table 2. Amplification was carried out in a DNA thermocycler under the following cycling conditions: initial denaturation at 94 °C for 5 min, followed by 35 cycles each consisting of denaturation at 94 °C for 40 s, annealing at the temperature stated in supplementary Table 2 and extension at 72 °C for 1 min, then a final extension at 72 °C for 10 min. The amplified products were electrophoresed on 1.25% agarose gel stained with ethidium bromide, photographed under ultraviolet (UV) light and compared with a DNA marker (GeneRuler 100 bp, Thermo Fisher Scientific™, UK) to determine their sizes. Clermont’s phylogenetic typing Phylotyping of EHEC isolates was performed as previously described by Clermont et al., 2013 . Amplicons were electrophoresed on 1.25% agarose gel stained with ethidium bromide and photographed under UV light.
Phylotypes of the EHEC isolates were assigned as A, B1, B2, C, D, E, F, or Clade I. Molecular typing by enterobacterial repetitive intergenic consensus PCR (ERIC-PCR) Molecular genotyping of the isolates was performed by ERIC-PCR using specific primers (supplementary Table 2) . The total reaction of 25 µL consisted of 12.5 µL of DreamTaq™ Green PCR Master Mix (2×), 0.5 µL of ERIC-1 (10 µM), 0.5 µL of ERIC-2 (10 µM), 8.5 µL of nuclease-free water and 3 µL of template DNA. Amplification was conducted under the following conditions: initial denaturation at 94 °C for 5 min; 35 cycles of denaturation at 94 °C for 40 s, annealing at 48 °C for 1 min and extension at 72 °C for 1.5 min; and a final extension at 72 °C for 10 min. The amplified DNA fragments were separated on 2% agarose gel and compared with a 100 bp plus DNA marker (Thermo Fisher Scientific™, UK; Cat. No. SM0323). The gel was then examined and evaluated using GelJ software. A similarity matrix was generated using Dice's coefficient, and the corresponding dendrogram was created using the unweighted pair group method with arithmetic averages (UPGMA) . Multi-locus sequence typing (MLST) Purification of PCR products Amplification of the MLST genes was performed using the previously described protocol with the primers listed in supplementary Table 2. The PCR was performed in a 100 μL reaction mixture containing 12 μL of template DNA, 50 μL of DreamTaq™ Green PCR master mix (2×), 2 μL of each of the forward and reverse primers and 34 μL of nuclease-free water. The standard cycling procedure was performed under the following conditions: initial denaturation for 2 min at 95 °C; 35 cycles of denaturation at 95 °C for 1 min, annealing at the variable temperatures stated in supplementary Table 2 for 30 s and extension at 72 °C for 2 min; and a final extension at 72 °C for 7 min.
The PCR products were analyzed by gel electrophoresis on 1.25% agarose gel stained with ethidium bromide and photographed under UV light; products of the target size were precisely cut from the gel and purified using the GeneJET Gel Extraction Kit (Thermo Scientific™). DNA sequencing The purified MLST PCR products were sequenced according to the Applied Biosystems BigDye Terminator v3.1 DNA sequencing reaction protocol at Colors Medical Lab, Cairo, Egypt. Nucleotide sequences of the housekeeping genes were submitted to the E. coli MLST Database ( https://pubmlst.org/bigsdb?db=pubmlst_escherichia_seqdef ) and the EnteroBase Database ( https://enterobase.warwick.ac.uk/species/ecoli/allele_st_search ) to determine the sequence types and clonal complexes. Statistical analysis and data interpretation Statistical analysis of the data was performed using GraphPad Prism (version 5.01), applying the chi-square test, Fisher's exact test and the Monte Carlo test. The significance of the obtained results was judged at p-value < 0.05.
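The band-matching step described for the ERIC-PCR analysis can be illustrated as follows. This is a minimal sketch, not the GelJ implementation: each fingerprint is reduced to a set of band positions, pairwise Dice coefficients are computed, and the dendrogram is then built (as in GelJ) by average-linkage clustering (UPGMA) of the distances 1 − similarity. The isolate names and band positions below are invented for the example.

```python
# Minimal sketch of ERIC-PCR fingerprint comparison via Dice's coefficient.
# Band positions (in base pairs) and isolate names are hypothetical.

def dice(bands_a: set[int], bands_b: set[int]) -> float:
    """Dice coefficient: 2 * |shared bands| / (|A| + |B|)."""
    if not bands_a and not bands_b:
        return 1.0
    return 2 * len(bands_a & bands_b) / (len(bands_a) + len(bands_b))

def similarity_matrix(fingerprints: dict[str, set[int]]) -> dict[tuple[str, str], float]:
    """Pairwise Dice similarities for all isolate pairs (upper triangle)."""
    names = sorted(fingerprints)
    return {(a, b): dice(fingerprints[a], fingerprints[b])
            for i, a in enumerate(names) for b in names[i + 1:]}

# Hypothetical band patterns for three isolates:
fp = {"E1": {200, 450, 700}, "E2": {200, 450, 900}, "E3": {1200}}
sim = similarity_matrix(fp)
# dice(E1, E2) = 2*2/(3+3) ≈ 0.667, so E1 and E2 would cluster first under UPGMA
```

In practice GelJ derives the band sets from lane images and tolerates small position differences when matching bands; the sketch treats positions as exact for clarity.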
The experimental protocol used in this research adheres to the ethical guidelines and principles of care, use, and handling of human subjects in medical research established by "The Research Ethics Committee, Faculty of Pharmacy, Mansoura University, Egypt" (code: 2022–126), which is governed by the World Medical Association's Code of Ethics (Declaration of Helsinki). Food samples (meat and dairy products) were obtained from one hundred eighty-two butchers’ shops and supermarkets, chicken isolates were obtained from fifteen poultry shops and fish isolates from two separate fish shop in Mansoura and Damietta cities, Egypt. Sewage isolates were obtained from sewage water from different places in Mansoura and Damietta cities. The fecal, meat, food samples were collected in sterile plastic containers and the milk, yogurt, sewage water and urine samples were collected in sterile falcons. Each individual sample was kept cool in icebox and transferred directly to the Microbiology and Immunology laboratory at the Faculty of Pharmacy, Mansoura university and stored at 4 °C till use; all samples were processed as fast as possible to avoid deterioration and contamination of samples. E. coli The collected specimens were cultivated for isolation of E. coli in nutrient broth media that contain yeast extract (0.2%), peptone (0.5%), sodium chloride (0.5%) and incubated for 24 h at 37 °C. Then, appropriate inoculum subjected to subsequent isolation on MacConkey's agar selective media plates which were incubated at 37 °C for 24 h. All plates were then examined for colony morphology. The suspected bacterial colonies that were identified by observation of lactose fermentation on MacConkey`s agar media and negative Gram staining were picked up and inoculated for further confirmation on Eosin methylene blue (EMB) agar plates . Colonies with the characteristic green metallic sheen on EMB agar were subjected to additional confirmation using biochemical tests . 
Food and sewage water samples were isolated according to Dickson et al. 1995 , where food samples (approximately 5 g) and liquid samples about 5 mL were homogenized before isolation and suspended in ratio of 1:10 in peptone buffered saline and incubated with shaking at 37ºC for 24–48h, then plating 100 µL of each agitated sample onto MacConkey's agar plates. Produced colonies were sub-cultured into nutrient agar slant and were subjected to various biochemical tests intended for conventional identification of E. coli . The confirmed E. coli isolates were submitted to Center of food analysis, Faculty of Veterinary Medicine, Benha university, Egypt. The serotyping was performed using sets of E. coli antisera for rapid diagnosis (Denka Seiken Co., Japan) . Biofilm assay using microtiter plate method Formation of biofilm was determined quantitatively using a 96 well flat-bottomed polystyrene microtiter plate . A pure colony from an overnight culture streaked on tryptic soy agar (TSA) (Sigma) was sub-cultured into 5 mL tryptic soy broth (TSB) (Sigma) supplemented with 0.25% anhydrous glucose for 24 h at 37 °C. Aliquots of 200 µL of pre-adjusted culture (OD 600 = 0.25) were placed in four adjacent wells, and a negative control containing only TSB was included. After incubation of the plate at 37 °C without shaking for 18–24 h, the contents of the wells were aspirated and rinsed three times with 200µL PBS (pH = 7.4) to eliminate any non-adherent cells. The adherent cells were fixed using 150µL of absolute methanol for 15 min, then left to dry. The fixed biofilm was then stained with 150µL of 1% w/v crystal violet for 20 min. Following that, the plate was rinsed three times with distilled water and dried in the air. The stained biofilm was resolubilized by adding 150µL of (33% v/v) glacial acetic acid per well, and the OD of the tested isolates was determined at 490 nm using a microtiter plate reader. The mean of the four ODs for each isolate was calculated. 
The capacity of biofilm formation was assessed according to Schönborn et al., 2017 . Briefly, cut-off optical density (ODc) was set as three standard deviations above the mean OD of the negative control. Strains were categorized as: non-biofilm producer, NBP (OD ≤ ODc); weak biofilm producer, WBP (ODc < OD WBP ≤ 2 × ODc); moderate biofilm producer, MBP (2 × ODc < OD MBP ≤ 4 × ODc) and strong biofilm producer, SBP (OD SBP > 4 × ODc). Swimming motility assay Motility evaluation of EHEC isolates was performed according to Murinda et al., 2002, shields and Cathcart, 2012 . Shortly, the bacterial culture was tested for motility using triphenyltetrazolium chloride (TTC) media. After 24–48 h of incubation at 37 °C, growth was indicated by the appearance of red color. As motility occurred, red color was visible surrounding the inoculation area. Hemagglutination assay All isolates were tested for their ability to agglutinate human erythrocytes using slide agglutination technique using human red blood cells (RBCs) (O-type) obtained from Blood Bank Gastroenterology Hospital, Mansoura university. Isolates were inoculated in nutrient broth media and incubated at 37°C for 48 h. Suspension of RBCs was prepared by washing of human RBCs (O-type) three times with physiological saline (0.9% NaCl) then, resuspended in saline to final conc of 3%. One drop of the bacterial culture was blended with one drop of the RBC suspension on a clean glass slide, and the mixture was vigorously stirred by a sterile tip to promote agglutination. The presence of erythrocyte clumping within 5 min of mixing was considered a positive agglutination result. Screening of hemolysin activity EHEC isolates' hemolytic activity was determined by streaking onto 5% blood agar plates. Plates were evaluated after incubation for 24 h at 37 °C . Hemolysin production was determined quantitatively according to Rossignol et al., 2008 . 
All EHEC isolates were cultured in TSB at 37 °C for 24 h with shaking and bacterial extract was obtained by centrifugation at 10,000 rpm for 10 min. A mixture of 500µL RBC suspension (2% O-type RBCs in 10 mM Tris HCl, pH 7.4) and 500µL bacterial extract previously prepared according to Rossignol was incubated for 2 h at 37 °C. A positive control (T) using 500µL 0.1% sodium dodecyl sulphate (SDS) and a negative control (B) using 500µL TSB with 500µL RBC suspension were included. After centrifugation at 10,000 rpm for 10 min, the amount of hemoglobin liberated in each sample (X) was measured at 540 nm. Three replicates were performed for each isolate. Hemolysis percentage was determined as follow: \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\text{Hemolysis}\%= ((\text{X}-\text{B})/(\text{T}-\text{B})) \times 100$$\end{document} Hemolysis % = ( ( X - B ) / ( T - B ) ) × 100 Serum resistance assay Serum resistance of the strains was analyzed, as described by Vandekerchove et al., 2005 using a turbidimetric assay . In a 96-well microtiter plate, 50 μL of bacterial culture (OD600 = 0.1) was combined with 150 μL of normal human serum obtained from Blood Bank Gastroenterology Hospital, Mansoura university. Using a microplate reader, the initial and final absorbance (after 3 h of incubation at 37 °C) were measured at 600 nm. The absorbance of each isolate was estimated by taking the average of three replicates. The percentage of remaining absorbance relative to the initial absorbance was calculated. 
The strain is classified as serum resistant (SR) if the remaining absorbance after 3 h was higher than 150%, intermediate resistant (IR) if it was between 125 and 150%, slow-intermediate resistant (S-I) if percent was between 100 and 125% and classified as serum sensitive (SS) if it was less than 100%. Formation of biofilm was determined quantitatively using a 96 well flat-bottomed polystyrene microtiter plate . A pure colony from an overnight culture streaked on tryptic soy agar (TSA) (Sigma) was sub-cultured into 5 mL tryptic soy broth (TSB) (Sigma) supplemented with 0.25% anhydrous glucose for 24 h at 37 °C. Aliquots of 200 µL of pre-adjusted culture (OD 600 = 0.25) were placed in four adjacent wells, and a negative control containing only TSB was included. After incubation of the plate at 37 °C without shaking for 18–24 h, the contents of the wells were aspirated and rinsed three times with 200µL PBS (pH = 7.4) to eliminate any non-adherent cells. The adherent cells were fixed using 150µL of absolute methanol for 15 min, then left to dry. The fixed biofilm was then stained with 150µL of 1% w/v crystal violet for 20 min. Following that, the plate was rinsed three times with distilled water and dried in the air. The stained biofilm was resolubilized by adding 150µL of (33% v/v) glacial acetic acid per well, and the OD of the tested isolates was determined at 490 nm using a microtiter plate reader. The mean of the four ODs for each isolate was calculated. The capacity of biofilm formation was assessed according to Schönborn et al., 2017 . Briefly, cut-off optical density (ODc) was set as three standard deviations above the mean OD of the negative control. Strains were categorized as: non-biofilm producer, NBP (OD ≤ ODc); weak biofilm producer, WBP (ODc < OD WBP ≤ 2 × ODc); moderate biofilm producer, MBP (2 × ODc < OD MBP ≤ 4 × ODc) and strong biofilm producer, SBP (OD SBP > 4 × ODc). 
Motility evaluation of EHEC isolates was performed according to Murinda et al., 2002, shields and Cathcart, 2012 . Shortly, the bacterial culture was tested for motility using triphenyltetrazolium chloride (TTC) media. After 24–48 h of incubation at 37 °C, growth was indicated by the appearance of red color. As motility occurred, red color was visible surrounding the inoculation area. All isolates were tested for their ability to agglutinate human erythrocytes using slide agglutination technique using human red blood cells (RBCs) (O-type) obtained from Blood Bank Gastroenterology Hospital, Mansoura university. Isolates were inoculated in nutrient broth media and incubated at 37°C for 48 h. Suspension of RBCs was prepared by washing of human RBCs (O-type) three times with physiological saline (0.9% NaCl) then, resuspended in saline to final conc of 3%. One drop of the bacterial culture was blended with one drop of the RBC suspension on a clean glass slide, and the mixture was vigorously stirred by a sterile tip to promote agglutination. The presence of erythrocyte clumping within 5 min of mixing was considered a positive agglutination result. EHEC isolates' hemolytic activity was determined by streaking onto 5% blood agar plates. Plates were evaluated after incubation for 24 h at 37 °C . Hemolysin production was determined quantitatively according to Rossignol et al., 2008 . All EHEC isolates were cultured in TSB at 37 °C for 24 h with shaking and bacterial extract was obtained by centrifugation at 10,000 rpm for 10 min. A mixture of 500µL RBC suspension (2% O-type RBCs in 10 mM Tris HCl, pH 7.4) and 500µL bacterial extract previously prepared according to Rossignol was incubated for 2 h at 37 °C. A positive control (T) using 500µL 0.1% sodium dodecyl sulphate (SDS) and a negative control (B) using 500µL TSB with 500µL RBC suspension were included. 
After centrifugation at 10,000 rpm for 10 min, the amount of hemoglobin liberated in each sample (X) was measured at 540 nm. Three replicates were performed for each isolate. Hemolysis percentage was determined as follow: \documentclass[12pt]{minimal} \usepackage{amsmath} \usepackage{wasysym} \usepackage{amsfonts} \usepackage{amssymb} \usepackage{amsbsy} \usepackage{mathrsfs} \usepackage{upgreek} \setlength{\oddsidemargin}{-69pt} \begin{document}$$\text{Hemolysis}\%= ((\text{X}-\text{B})/(\text{T}-\text{B})) \times 100$$\end{document} Hemolysis % = ( ( X - B ) / ( T - B ) ) × 100 Serum resistance of the strains was analyzed, as described by Vandekerchove et al., 2005 using a turbidimetric assay . In a 96-well microtiter plate, 50 μL of bacterial culture (OD600 = 0.1) was combined with 150 μL of normal human serum obtained from Blood Bank Gastroenterology Hospital, Mansoura university. Using a microplate reader, the initial and final absorbance (after 3 h of incubation at 37 °C) were measured at 600 nm. The absorbance of each isolate was estimated by taking the average of three replicates. The percentage of remaining absorbance relative to the initial absorbance was calculated. The strain is classified as serum resistant (SR) if the remaining absorbance after 3 h was higher than 150%, intermediate resistant (IR) if it was between 125 and 150%, slow-intermediate resistant (S-I) if percent was between 100 and 125% and classified as serum sensitive (SS) if it was less than 100%. Genomic DNA extraction The DNA templates were extracted from tested isolates by boiling method previously reported by Said et al., 2018 . Concisely, purified E. coli colonies were suspended in 100 μL of sterile DNase free water in 0.2 mL sterile PCR tube and subjected to heat block in thermocycler at 95 °C for 10 min, followed by centrifugation at 10,000 rpm for 3 min and stored in −20°C. Supernatant served as DNA template for all the following PCR sets. 
Screening of some toxin genes by PCR Shiga toxin I (stx1), Shiga toxin II (stx2), intimin (eae) and hemolysin (ehxA, ehlyA, hlyA, ehlyA and sheA) genes were screened among the tested isolates using PCR. The reaction mixture of a total volume 25 μL consisted of 12.5 μL master mix (DreamTaq Green master), 2 μL of template DNA, 0.5µL from each primer listed in supplementary Table 2 (Invitrogen TM, UK) and the final volume completed with DNase free water. stx gene subtyping Subtyping of stx1 and stx2 genes was carried out as previously reported using primer pairs listed in supplementary Table 2. Amplification was carried out using a DNA thermocycler with a predetermined cycling condition: initial denaturation at 94 °C for 5 min followed by 35 cycles each consisted of denaturation at 94 °C for 40 s, annealing at the temperature as stated in supplementary Table 2 and extension at 72 °C for 1 min, then final extension at 72 °C for 10 min. The amplified products were electrophoresed on 1.25% agarose gel stained with ethidium bromide and photographed under ultraviolet light (UV) light and compared with a DNA marker (Gene Ruler 100 bp, Thermo Fisher Scientific Tm, UK) to detect their sizes. Clermont’s phylogenetic typing Phylotyping of EHEC isolates was performed as previously described by Clermont et al., 2013 . Amplicons were electrophoresed using 1.25% agarose gel stained with ethidium bromide and photographed under UV light. Phylotypes of EHEC isolates were assigned as: A, B1, B2, C, D, E, F, and Clade I. Molecular typing by enterobacterial repetitive intergenic consensus PCR (ERIC-PCR) Molecular genotyping of isolates was performed using (ERIC-PCR) using specific primers (supplementary Table 2) . A total reaction of 25 µL consisted of 12.5µL of Dream Taq™ Green PCR Master Mix (2x), 0.5 µL of ERIC-1 (10 µM), 0.5 µL of ERIC-2 (10 µM), 8.5 µL of nuclease-free water and 3µL of template DNA. 
Amplification was conducted with the following conditions: initial denaturing at 94 °C for 5 min, 35 cycles of denaturation at 94 °C for 40 s annealing at 48 °C for 1 min, extension at 72 °C for 1.5 min, then the reaction was terminated by a final extension of 72 °C for 10 min. The amplified DNA fragments were separated on 2% agarose gel and compared with 100 bp plus DNA marker (Thermo Fisher Scientific™, UK) Cat. No. (SM0323). Following this, the gel was examined and evaluated using gelJ software. A similarity matrix was generated using Dice's coefficient and subsequently, the corresponding dendrogram was created utilizing the unweighted-pair group method with arithmetic averages (UPGMA) . The DNA templates were extracted from tested isolates by boiling method previously reported by Said et al., 2018 . Concisely, purified E. coli colonies were suspended in 100 μL of sterile DNase free water in 0.2 mL sterile PCR tube and subjected to heat block in thermocycler at 95 °C for 10 min, followed by centrifugation at 10,000 rpm for 3 min and stored in −20°C. Supernatant served as DNA template for all the following PCR sets. Shiga toxin I (stx1), Shiga toxin II (stx2), intimin (eae) and hemolysin (ehxA, ehlyA, hlyA, ehlyA and sheA) genes were screened among the tested isolates using PCR. The reaction mixture of a total volume 25 μL consisted of 12.5 μL master mix (DreamTaq Green master), 2 μL of template DNA, 0.5µL from each primer listed in supplementary Table 2 (Invitrogen TM, UK) and the final volume completed with DNase free water. gene subtyping Subtyping of stx1 and stx2 genes was carried out as previously reported using primer pairs listed in supplementary Table 2. 
Amplification was carried out using a DNA thermocycler with a predetermined cycling condition: initial denaturation at 94 °C for 5 min followed by 35 cycles each consisted of denaturation at 94 °C for 40 s, annealing at the temperature as stated in supplementary Table 2 and extension at 72 °C for 1 min, then final extension at 72 °C for 10 min. The amplified products were electrophoresed on 1.25% agarose gel stained with ethidium bromide and photographed under ultraviolet light (UV) light and compared with a DNA marker (Gene Ruler 100 bp, Thermo Fisher Scientific Tm, UK) to detect their sizes. Phylotyping of EHEC isolates was performed as previously described by Clermont et al., 2013 . Amplicons were electrophoresed using 1.25% agarose gel stained with ethidium bromide and photographed under UV light. Phylotypes of EHEC isolates were assigned as: A, B1, B2, C, D, E, F, and Clade I. Molecular genotyping of isolates was performed using (ERIC-PCR) using specific primers (supplementary Table 2) . A total reaction of 25 µL consisted of 12.5µL of Dream Taq™ Green PCR Master Mix (2x), 0.5 µL of ERIC-1 (10 µM), 0.5 µL of ERIC-2 (10 µM), 8.5 µL of nuclease-free water and 3µL of template DNA. Amplification was conducted with the following conditions: initial denaturing at 94 °C for 5 min, 35 cycles of denaturation at 94 °C for 40 s annealing at 48 °C for 1 min, extension at 72 °C for 1.5 min, then the reaction was terminated by a final extension of 72 °C for 10 min. The amplified DNA fragments were separated on 2% agarose gel and compared with 100 bp plus DNA marker (Thermo Fisher Scientific™, UK) Cat. No. (SM0323). Following this, the gel was examined and evaluated using gelJ software. A similarity matrix was generated using Dice's coefficient and subsequently, the corresponding dendrogram was created utilizing the unweighted-pair group method with arithmetic averages (UPGMA) . 
Purification of PCR products Amplification of MLST genes was performed using the previously described protocol with the primers listed in supplementary Table 2. The PCR reaction was performed in a 100 μL mixture containing 12 μL of template DNA, 50 μL of DreamTaq™ Green PCR Master Mix (2×), 2 μL of each forward and reverse primer, and 34 μL of nuclease-free water. The standard cycling procedure was as follows: initial denaturation at 95 °C for 2 min; 35 cycles of denaturation at 95 °C for 1 min, annealing for 30 s at the variable temperature stated in supplementary Table 2, and extension at 72 °C for 2 min; and a final extension at 72 °C for 7 min. The PCR products were analyzed by electrophoresis on a 1.25% agarose gel stained with ethidium bromide and photographed under UV light; products of the target size were then excised from the gel and purified using the GeneJET Gel Extraction Kit (Thermo Scientific™). DNA sequencing The purified MLST PCR products were sequenced according to the Applied Biosystems BigDye Terminator v3.1 sequencing protocol at Colors Medical Lab, Cairo, Egypt. Nucleotide sequences of the housekeeping genes were submitted to the E. coli MLST database ( https://pubmlst.org/bigsdb?db=pubmlst_escherichia_seqdef ) and the EnteroBase database ( https://enterobase.warwick.ac.uk/species/ecoli/allele_st_search ) to determine the sequence types and clonal complexes. Statistical analysis and data interpretation Statistical analysis was performed using GraphPad Prism (version 5.01), applying the chi-square, Fisher's exact, and Monte Carlo tests. Results were judged significant at p-value < 0.05.
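Conceptually, the final MLST step matches a seven-gene allele profile against a profile table to yield a sequence type (ST). A toy sketch of that lookup is shown below; the actual assignments in this study were made by the PubMLST and EnteroBase web tools, and the allele numbers in this table are invented for illustration only.

```python
# Toy illustration of ST assignment from an MLST allele profile.
# Gene order follows the Achtman E. coli scheme; allele numbers are invented.
GENES = ("adk", "fumC", "gyrB", "icd", "mdh", "purA", "recA")

# hypothetical profile table mapping allele combinations to STs
PROFILES = {
    (6, 4, 12, 1, 20, 13, 7): "ST120",
    (9, 6, 33, 131, 24, 8, 7): "ST394",
}

def assign_st(profile):
    """Return the ST for a seven-allele profile, or None if unknown."""
    return PROFILES.get(tuple(profile))

print(assign_st([6, 4, 12, 1, 20, 13, 7]))
```

In practice the profile table holds thousands of entries and is curated by the databases named above; this sketch only shows the exact-match logic.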
Isolation, identification, and serotyping of EHEC Out of the 335 clinical, food, and sewage water specimens collected, 105 (31%) E. coli isolates were identified. Their distribution by source was as follows: 19/41 human urine, 46/77 human stool, 23/81 cheese, 2/6 yogurt, 2/2 raw milk, 7/27 cattle meat, 1/5 beef burger, 1/3 pastrami, 1/15 chicken, 1/2 fish, and 2/7 sewage water. No E.
coli isolates were obtained from vegetables, sausage, or luncheon. Serological typing classified the isolated E. coli into four pathotypes: 31 EHEC, 19 EPEC, 11 ETEC, and 4 EIEC among clinical isolates, and 21 EHEC, 9 EPEC, 8 ETEC, and 2 EIEC among food and sewage water isolates. Thus, 52 (49.5%) of the identified isolates were assigned as EHEC and were subjected to further investigation throughout the study. The prevalence of EHEC among E. coli isolated from urine and stool was 57.9% and 43.5%, respectively. Among food isolates, EHEC was detected in 62.5% and 52.17% of E. coli isolated from meat and cheese, respectively, and all E. coli isolated from yogurt and sewage water were EHEC. Among the 52 EHEC isolates, nine serotypes were identified. O111:H2 (23%) was the most frequently detected serotype, followed by O91:H21 (21.2%) and O26:H11 (19.2%). The serotypes O55:H7 (11.5%), O117:H4 (9.6%), O126:H21 (9.6%), O113:H4 (1.92%), O121:H7 (1.92%), and O103:H4 (1.92%) were also detected. No significant difference was noticed between clinical, food, and sewage water isolates concerning the distribution of serotypes (Table ). Phenotypic identification of virulence determinants among EHEC isolates Biofilm formation capacity Biofilm formation was observed in 51/52 (98%) of EHEC isolates. Among clinical isolates, 30/31 (96.77%) formed biofilm; these were classified as strong (1 isolate, 3.3%), moderate (5 isolates, 16.7%), and weak (24 isolates, 80%) biofilm producers. All food and sewage water EHEC isolates were biofilm producers, divided equally into strong, moderate, and weak producers (7 isolates, 33.3% each) (Table ).
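The strong/moderate/weak grading above is conventionally derived from microtiter-plate OD readings compared against a negative-control cut-off. The sketch below applies the widely used Stepanović cut-offs as an assumption; the study's exact thresholds are not restated in this section, so the numeric values here are illustrative.

```python
# Biofilm classification from a mean OD reading and a control cut-off ODc
# (Stepanović-style scheme; thresholds are an assumption for illustration).
def classify_biofilm(od, odc):
    """Classify one isolate's mean OD against the negative-control cut-off."""
    if od <= odc:
        return "non-producer"
    if od <= 2 * odc:
        return "weak"
    if od <= 4 * odc:
        return "moderate"
    return "strong"

odc = 0.10  # hypothetical negative-control cut-off
for od in (0.08, 0.15, 0.30, 0.55):
    print(od, classify_biofilm(od, odc))
```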
Statistical analysis revealed that strong biofilm formation was significantly more frequent among EHEC from food and sewage water sources than among clinical isolates ( P = 0.003), whereas weak biofilm formation was significantly more frequent among clinical isolates ( P = 0.001). Phenotypic screening of motility, blood agglutination, hemolysin, and serum resistance In motility testing, 46/52 (88.5%) EHEC isolates showed growth diffusion with a red color around the stab line, indicating motility. While all clinical EHEC isolates were motile, only 71% of food and sewage water EHEC isolates were motile ( P = 0.001). Notably, all non-motile isolates were obtained from food sources. For blood agglutination, 18/52 (34.6%) isolates were able to agglutinate human RBCs within 5 min of vigorous stirring (Table ). Blood agglutination was more frequently observed among clinical isolates [14 (45%)] than among food and sewage water isolates [4 (19%)]. Hemolytic activity (β-hemolysis) was observed in only one yogurt isolate; the remaining 51 isolates (98%) were non-hemolytic (Table ). Among the 52 EHEC isolates, 48 (92.3%) were serum resistant. Twenty-nine (93.5%) clinical isolates showed serum resistance [6 (20.69%) resistant, 5 (17.24%) intermediate, and 18 (62.07%) slow intermediate]. Serum resistance was detected in 19 (90.5%) food and sewage water isolates [7 (36.84%) resistant, 2 (10.53%) intermediate, and 10 (52.63%) slow intermediate]. No significant difference was noticed between clinical, food, and sewage water EHEC isolates regarding blood agglutination, hemolysis, or serum resistance (Table ). Molecular detection of some virulence genes Molecular detection of stx1, stx2, eae and hemolysin genes All EHEC isolates were tested for toxin-associated genes by PCR (supplementary Fig. 1). The Shiga toxin I ( stx1 ) gene was harbored by only two clinical isolates.
Subtyping of the two stx1-positive isolates showed that both were of the stx1a subtype; no stx1c or stx1d subtypes were detected. The Shiga toxin II ( stx2 ) gene was more prevalent than stx1 , being detected in 39/52 (75%) isolates. stx2 -positive clinical isolates (29/31, 93.5%) were significantly more abundant than stx2 -positive food and sewage water isolates (10/21, 47.6%) ( P = 0.001). Three stx2 subtypes were obtained: stx2b , stx2d , and stx2g . Among the 39 stx2 -positive isolates, stx2g was the most frequently detected subtype (17/39, 43.59%), followed by stx2b (8/39, 20.5%); stx2d was the least detected (2/39, 5.13%). None of the isolates were subtyped as stx2a , stx2c , stx2e , or stx2f . Notably, five isolates harbored a combination of the stx2b and stx2g subtypes simultaneously. Seventeen stx2 -positive isolates could not be assigned to any of the tested subtypes. A significant difference in the prevalence of stx2g and unsubtyped isolates was found between isolates from different sources ( P = 0.019 and P = 0.0008, respectively) (Table ). Additionally, the eae gene was detected in 19/52 (36%) isolates: 12/31 (39%) clinical isolates and 7/21 (33%) food isolates. Ten of 20 (50%) EHEC urine isolates and 2/11 (18%) EHEC stool isolates were eae positive. Among food isolates, dairy products harbored the eae gene more frequently than meat products, with 6/14 (42.8%) dairy and 1/5 (20%) meat product isolates being eae positive. Hemolysin genes were detected in 47/52 (90%) EHEC isolates. The silent hemolysin ( sheA ) gene was detected in 44/52 (84.6%) isolates, with a higher detection rate among clinical isolates (28/31, 90.3%) than among food and sewage water isolates (16/21, 76.2%). The α-hemolysin ( hlyA ) gene was harbored by only 4/52 (7.7%) isolates, all of clinical origin.
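As an aside, the clinical-versus-food/sewage stx2 comparison above (29/31 vs 10/21 positive) can be reproduced with Fisher's exact test, one of the tests named in the statistical-analysis section. The exact p-value printed here depends on the test chosen, so it need not equal the reported P = 0.001, but it falls well below 0.05.

```python
# Fisher's exact test on the stx2 prevalence comparison reported above.
from scipy.stats import fisher_exact

clinical = (29, 31)    # stx2-positive, total clinical isolates
food_env = (10, 21)    # stx2-positive, total food/sewage isolates

table = [
    [clinical[0], clinical[1] - clinical[0]],   # 29 positive, 2 negative
    [food_env[0], food_env[1] - food_env[0]],   # 10 positive, 11 negative
]
odds_ratio, p = fisher_exact(table)
print(f"clinical: {29/31:.1%}, food/sewage: {10/21:.1%}, p = {p:.4f}")
```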
The enterohemolysin-a ( ehlyA ) and enterohemolysin-x ( ehxA ) genes were not detected in any isolate (Table ). The virulence genes detected among EHEC isolates are represented in supplementary Fig. 1. Virulence gene profiles Gene profiles and their distribution among EHEC isolates were investigated. Fifteen different combinations of the tested virulence genes were obtained among the 52 EHEC isolates (Table ). The most prevalent profile was stx2, sheA (10/52, 19.2%), followed by stx2, stx2g, sheA, eae (6/52, 11.5%) and by stx2, sheA, eae and stx2, stx2g, sheA (5/52, 9.6% each). Nine profiles were unique, each detected in only one isolate. One meat product isolate and one sewage water isolate did not harbor any of the tested genes. Clinical isolates showed higher diversity, revealing 14 different profiles, whereas only nine profiles were shown by food and sewage water isolates. Phylogenetic groups Phylogenetic typing, based on Clermont's scheme, showed diversity among the EHEC isolates, which were assigned to seven phylogroups. B1 and C were the predominant phylogroups (15/52, 28.8% each), followed by A and D (5/52, 9.6% each) and B2 and E (3/52, 5.77% each); only two (3.8%) isolates were assigned to phylogroup F. The distribution of the detected phylogroups revealed no correlation between the phylotypes of the isolates and their source ( p > 0.05) (Table ). Genotyping by ERIC-PCR The tested EHEC isolates were genotyped into 46 different patterns by ERIC-PCR. The largest pattern (P27) comprised four isolates obtained from food sources, including cheese and meat samples. The clonal relationship between EHEC isolates was analyzed using GelJ software and UPGMA clustering (Fig. ). The majority of isolates showed > 70% similarity. The dendrogram clustered the isolates into sixteen clusters.
In addition to a group of four food isolates (EM1, EM2, EC2, and EC5) and another pair of food isolates (EC6 and EC9), two pairs of clinical isolates, (CS14 and CS15) and (CU5 and CS3), showed 100% similarity. Multiple sequence analysis of the seven housekeeping genes The analysis of each gene among the three selected isolates is shown in supplementary Table 3. Allele profiles, sequence typing, and clonal complexes of the selected EHEC isolates The allelic numbers and the corresponding sequence types and clonal complexes were designated using the MLST websites. The three isolates were assigned to three different sequence types (STs) based on the specific combination of allele numbers determined automatically via the PubMLST website: EC9 was assigned to ST120, CS9 to ST394 of clonal complex (CC) ST394 Cplx, and CU11 to ST70 (supplementary Table 3). Association between isolates, phylogroups, and phenotypic and genotypic characters A correlation matrix and hierarchical clustering with a heat map (Fig. ) were used to detect associations between the phenotypic and genotypic features and the origin of the isolates. The heat map classified the isolates into two clusters comprising 20 patterns. Of the 52 EHEC isolates, 21 showed 70% similarity. Nine groups of isolates showed identical patterns across several traits: (EM2, EW2), (CU2, CU3, CU4, CU5, CU6, CS17, CS18, CS20, EM1, and EM4), (EC1, EW1), (EC3, EC5, EM3), (CS2, CS4, CS6, CS8), (CU11, CS3), (CS1, CS10, CS19), (CS5, CS9, CS12), and (EC9, EC11).
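The Clermont (2013) quadruplex assignment used for phylotyping can be sketched as a lookup over the four marker-gene results. Only the unambiguous patterns of the published decision table are encoded below, and patterns that the scheme resolves with an extra group-specific PCR are flagged for follow-up; treat the table as a simplified sketch rather than a full implementation.

```python
# Simplified Clermont (2013) quadruplex decision table.
# Keys are (arpA, chuA, yjaA, TspE4.C2) presence flags (1 = amplified).
QUADRUPLEX = {
    (1, 0, 0, 0): "A",
    (1, 0, 0, 1): "B1",
    (0, 1, 0, 0): "F",
    (0, 1, 1, 0): "B2",
    (0, 1, 1, 1): "B2",
    (0, 1, 0, 1): "B2",
    (1, 1, 0, 0): "D or E (needs group-E PCR)",
    (1, 1, 0, 1): "E or clade I (needs group-E PCR)",
    (1, 0, 1, 0): "A or C (needs group-C PCR)",
}

def phylogroup(arpA, chuA, yjaA, tspE4):
    """Assign a phylogroup from the quadruplex result, if unambiguous."""
    return QUADRUPLEX.get((arpA, chuA, yjaA, tspE4), "unknown/clade")

print(phylogroup(1, 0, 0, 1))   # B1, one of the predominant groups here
```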
The emergence of O157 and non-O157 EHEC foodborne pathogens has become a major concern worldwide . They cause serious disease outbreaks and severe illness in humans, such as diarrhea, HC, and the potentially fatal HUS . The great attention paid to non-O157 EHEC could be related to the increased prevalence of these organisms in human and animal infections, as well as increased public awareness of the danger of infection caused by non-O157 serogroups . Children under the age of five are more vulnerable and at greater risk of dying from diarrhea caused by E. coli infection . The goal of this study was to explore the prevalence of EHEC from different sources in Egypt and to investigate their molecular characteristics, including the distribution of virulence factors among these isolates. A total of 52 (49.5%) EHEC isolates were identified (Table ).
The prevalence of EHEC was similar among clinical (31/65, 47.7%) and food and sewage water (21/40, 52.5%) isolates, and none of these isolates was O157. In contrast, Ahmed et al. (2017) reported that 33% of their EHEC isolates were O157 . In Germany, one-third of non-O157 human EHEC infections, such as diarrhea and HUS, were associated with the O26, O103, O111, and O145 serogroups, which have also caused global outbreaks . Of these serogroups, O26, O103, and O111 were detected in our study; similarly, Ahmed et al. (2017) reported the presence of these three serogroups among their EHEC isolates. A high prevalence of EHEC was observed among our urine and stool isolates (Table ), with a higher rate in urine than in stool. Other studies have reported variable rates of EHEC isolation from clinical sources, e.g., 30.77% in Iran and 0.4% in South Africa . Meat products are a substantial cause of human EHEC infections . Rahimi et al. (2012) reported the presence of EHEC in meat products in the Netherlands (10.4%), England (13.4%), and Iran (8.2%) ; similarly, EHEC was detected in 14.3% of all meat samples in our study, and other researchers have detected comparable rates of EHEC in animal products (9.5%) . None of our vegetable samples harbored EHEC, despite the detection of EHEC in vegetables in different countries . Raw milk and raw-milk products have been implicated in many diseases and even fatalities . Several countries have reported variable levels of the stx gene, the most crucial virulence factor in EHEC-associated infections, in their milk products . In our study, 7 out of 12 EHEC cheese isolates were stx gene positive. This finding is alarming and necessitates strict control over the cheese industry to prevent the spread of this organism. Fortunately, the yogurt isolates identified as EHEC did not harbor the stx gene. E.
coli is known to persist in natural environments owing to biofilm formation . The majority (98.1%) of our non-O157 EHEC isolates were biofilm producers, consistent with several previous studies . Strong biofilm production was observed in 3.2% of clinical isolates, and 9% of urine isolates were strong biofilm producers (Table ); a recent study in Egypt reported that 22.7% of urine isolates were strong biofilm producers . Many bacterial pathogens rely on flagellar motility in the early stages of infection. While Sherfi et al. (2013) concluded that E. coli motility and indole production are related, only 88.5% of our indole-positive isolates were motile . Hemagglutination (HA) of erythrocytes is believed to be a major virulence factor in E. coli strains that cause extraintestinal illnesses in humans . A higher HA detection rate was found among clinical isolates (45%) than among food isolates (19%). HA was observed in 60% of our fecal isolates and 18% of our urine isolates (Table ), whereas a previous report observed HA in 25% of urine isolates and 4% of fecal isolates . Prolonged persistence of hemolytic E. coli strains in the host could potentially lead to the onset of extra-intestinal infections , bloodstream infection, and sepsis . Beta-hemolytic activity was detected in one yogurt isolate and in no clinical isolates (Table ), whereas previous studies observed higher rates of hemolytic activity (42.2%, 25%, and 16.8%) in E. coli isolates . Serum resistance enables E. coli to evade the complement system and increases the risk of septic shock and mortality . In this study, 92.3% of EHEC isolates showed varying levels of serum resistance (Table ). Harboring the stx gene is a trait shared by EHEC/STEC isolates. In our study, the stx2 gene (75%) was more prevalent than stx1 (3.8%), and 3.8% of EHEC isolates harbored both genes (Table ).
This result is supported by Sallam et al. (2013), who found stx1 and stx2 in 46.7% and 86.7% of their isolated EHEC strains, respectively, and by Jajarmi et al. (2017), who detected stx1 (52%), stx2 (64%), and both genes together (16%) among their isolates . The stx1 gene was found among isolated STEC at low rates of 0.16% and 5.2%, as reported by Kargar and Homayoon (2015) and Tarazi et al. (2021), respectively . It has been reported that STEC strains producing stx2 are more likely to cause HUS and can cause more severe neurological symptoms in piglets than strains producing only stx1 or both stx1 and stx2 , whereas stx1-producing strains induce only diarrhea without systemic complications . Subtyping of the stx1-positive isolates showed that they were of the stx1a subtype. The same finding was reported by Elsayed et al., who detected no stx1c in non-O157 E. coli isolates . The stx2 group comprises the stx2a, stx2b, stx2c, stx2d, stx2e, stx2f, and stx2g subtypes . Our results indicated that stx2g (43.6%) was the most prevalent subtype among stx2-positive isolates, followed by stx2b (20.5%) and stx2d (5.1%), while stx2a, stx2c, stx2e, and stx2f were not detected among our isolates (Table ). Elsayed et al. (2021) detected no stx2a, stx2d, stx2f, or stx2g subtypes in their isolates. In contrast, Jajarmi et al. identified stx1a (52%), stx2a (44%), stx2c (44%), and stx2d (30%) in selected STEC isolates . Supporting our findings, stx2c has often been detected in isolates from HUS patients, whereas stx2d is often isolated from cases of uncomplicated diarrhea . The stx2e subtype is predominantly isolated from pigs and pork products . Schmidt et al. (2000) concluded that stx2f appears to be closely related to STEC of bird and pigeon origin . The eae gene is one of the most common pathogenicity genes found in the environment and renders STEC strains more virulent for humans.
However, eae-lacking STEC strains have caused a minority of sporadic HUS cases . Several putative non-intimin-based adhesins have been described in non-O157:H7 STEC, such as long polar fimbriae and the STEC auto-agglutinating adhesin . In our study, the eae gene was detected in 19/52 (36%) isolates. The hemolysin gene, one of the key pathogenicity factors of E. coli, was found in 92% of our EHEC isolates, and its subtypes were determined; the silent hemolysin gene (sheA) was the most prevalent. The alpha-hemolysin (hlyA) gene was detected in 7.7% of clinical isolates only (Table ). Similarly, low prevalences of the hlyA gene were reported in cheese (2.25%) and raw milk (0.9%) . Fifteen different combinations of virulence genes were detected among our 52 EHEC isolates. The most prevalent profile was stx2, sheA (19.2%), followed by stx2, stx2g, sheA, eae (11.5%), stx2, sheA, eae (9.6%), and stx2, stx2g, sheA (9.6%) (Table ). Eklund et al. (2002) reported 11 combinations in Finland . Furthermore, combinations of stx gene subtypes were detected among clinical isolates only: four isolates harbored stx2b with stx2g, while stx1a with stx2g and stx1a with stx2b and stx2g were each harbored by a single isolate, similar to Elsayed et al. (2021) . Various other combinations have been detected previously . Supporting our findings, Dong et al. (2017) reported that stx1d, stx2e, and stx2f were not detected among their E. coli isolates . A multiplex PCR system was used to classify the E. coli strains into phylogenetic groups. Most of our EHEC isolates belonged to the non-pathogenic B1 and C phylogroups, and a few strains belonged to groups E and F (Table ), consistent with Rúgeles et al. (2010) . Eight of our 52 EHEC isolates belonged to the pathogenic B2 and D phylogroups; all were of clinical origin and harbored stx2 and hemolysin genes, while only five of them harbored the eae gene.
ERIC-PCR was applied to examine the genetic similarity of EHEC isolates obtained from diverse sources. The isolates were genotyped into forty-six different patterns, and 100% similarity was observed between many isolates, which may indicate dissemination from a common origin (Fig. ). We found no correlation between ERIC patterns and serotypes: O55:H7 isolates were highly divergent, falling into six different groups (P1, P6, P12, P18, P19, and P33). In contrast, Dalla Costa et al. reported that related clones such as O55:H7 and O157:H7 displayed similar ribotypes and clustered together in a dendrogram . Furthermore, Nicholas Waters et al. demonstrated that phylotypic classification and whole-genome phylogeny are correlated , whereas our study found no correlation between phylogroups and ERIC clusters. The MLST technique is used to study the genetic relatedness of isolates, and closely related strains can be categorized into clonal complexes . Isolates were selected for MLST as follows: two isolates of clinical origin, both belonging to pathogenic phylogroup D, and one food isolate belonging to non-pathogenic phylogroup A. The three selected EHEC isolates were assigned to three different sequence types, demonstrating clonal diversity among the isolates. EC9, isolated from cheese, was assigned to the endemic clone ST120, belonging to commensal phylogenetic group A. This endemic clone of ST120 has been previously reported in China , which necessitates immediate action to prevent its spread. ST120 strains identified from cormorants were CTX-M-15 producers and belonged to commensal phylogenetic group B1 . The stool isolate CS9 belonged to pathogenic phylogroup D and ST394 (clonal complex ST394 Cplx).
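The percent similarities behind an ERIC-PCR dendrogram such as the one described above are typically computed from band-sharing coefficients, most often the Dice coefficient. The sketch below illustrates that calculation; the isolate names and band sizes are invented for illustration and are not taken from this study.

```python
def dice_similarity(bands_a, bands_b):
    """Dice band-sharing coefficient between two ERIC-PCR fingerprints,
    each represented as a set of band sizes (bp)."""
    shared = len(bands_a & bands_b)
    return 2 * shared / (len(bands_a) + len(bands_b))

# Hypothetical band profiles for three isolates (illustrative only)
iso_a = {180, 350, 500, 720, 1100}
iso_b = {180, 350, 500, 720, 1100}   # identical pattern -> 100% similarity
iso_c = {180, 400, 500, 900}

print(dice_similarity(iso_a, iso_b))            # 1.0
print(round(dice_similarity(iso_a, iso_c), 2))  # 0.44
```

Two isolates with identical band patterns score 1.0 (the "100% similarity" reported for many isolates), while partially overlapping patterns score proportionally lower.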
Zahra et al. (2018) detected ST394 in sewage in Pakistan . Alarmingly, ST394 has been associated with sporadic diarrheal outbreaks in various countries . An ST394 strain previously isolated from raw milk, belonging to phylogroup D, was confirmed to be CTX-M-15-producing . The urine isolate CU11 in our study was assigned to the epidemic clone ST70 and belonged to pathogenic phylogroup D. Alarmingly, ST70 was previously detected in an acute intensive care unit in Venezuela, indicating its ability to cause recurrent outbreaks . The three detected STs have previously been characterized as emerging ESBL producers capable of degrading expanded-spectrum cephalosporins and monobactams, and resistant to aminoglycosides and fluoroquinolones as well as gentamicin and ciprofloxacin, leaving only a few reliable alternative therapies . Comparing the three selected isolates by the two typing techniques, ERIC typing classified them into three distinct patterns (P3, P4, and P19) while MLST assigned them to three different sequence types (ST70, ST120, ST394), demonstrating genetic diversity among the isolates and supporting the discriminatory power of both techniques. Associations between phenotypic and genotypic features and serotypes (Fig. ) revealed that isolates CU4 and CU5, which shared several traits, were both obtained from urine and belonged to pathogenic phylogroup B2, while isolates CS9 and CS12, which also shared several traits, were obtained from stool and belonged to pathogenic phylogroup D. In conclusion, our study indicates that cheese and meat products represent a serious threat as sources of EHEC infection. Various serotypes were detected among clinical, food, and sewage water isolates, including O111:H2, O91:H21, O26:H11, O55:H7, O117:H4, O126:H21, O113:H4, O121:H7, and O103:H4, which represents a potential risk to public health.
It is worrying that 58% of our cheese isolates harbored the stx2 gene, the most important virulence trait in EHEC infections, and that 77% of all EHEC isolates carried a stx2, hemolysin and/or eae gene combination, making them highly virulent. Among the sequence types identified by MLST, ST394 has been associated with acute and chronic sporadic diarrheal outbreaks in both developed and developing countries, and the detected epidemic clone ST70 is capable of causing recurrent outbreaks. The endemic clone ST120 detected in this work was isolated from cheese, which calls for urgent action to prevent its spread; we emphasize the importance of strict control over cheese factories, since they pose a risk of infection with hazardous microorganisms. Our study showed that the pathogenic phylogroups B2 and D were found solely among clinical isolates. Together with the high rate of stx2 detection in clinical isolates, this represents a major threat for the dissemination of EHEC among hospital- and community-acquired E. coli isolates. Alarmingly, clinical EHEC isolates harbored high virulence gene scores and combinations, posing a public health problem. Our findings may substantially inform the development of preventive strategies for E. coli infections by identifying potential sources that could serve as vehicles for the transmission of these pathogenic bacteria. Supplementary Material 1. Supplementary Material 2. Supplementary Material 3. Supplementary Material 4. Supplementary Material 5.
Prevalence of dental caries and associated factors among secondary school students in Kigali, Rwanda: a cross-sectional study
Half of the participants with dental caries were from the city of Kigali . Recent studies on dental caries in the city of Kigali showed low utilization of oral health care services among adolescents, together with a high prevalence of untreated dental caries . Dental caries was the main cause of dental pain, school absenteeism, life-threatening dental infections, and oral discomfort . In the city of Kigali, oral health care services are still limited and are provided by both public and private health institutions. Public health centers offer only basic oral care, with some cases transferred to district and referral hospitals. Dental caries is the main reason for seeking dental care, and extraction is the main service provided. Private clinics are scarce and costly . As in other low-income countries, oral health care in Rwanda is not treated as a priority, and dental infrastructure is limited . Poor oral health practices, lack of fluoridated water, not using fluoridated toothpaste, inability to pay for dental care, and poor oral hygiene practices have also been reported . Determining the prevalence of dental caries in this age group is crucial because adolescence is a transition period for health behaviors. Since no similar study had been done, this study aimed to determine the prevalence of dental caries and associated factors among secondary school students in Kigali, Rwanda. Study design A quantitative analytical cross-sectional study was conducted from September 2023 to January 2024 in the city of Kigali, the capital city of Rwanda. It was carried out in 42 private and public secondary schools. Past caries experience and the distribution of factors associated with dental caries among secondary school students were recorded. Study setting This study was conducted in the City of Kigali, which is the capital of Rwanda and comprises three districts: Gasabo, Kicukiro, and Nyarugenge.
This city has both urban and rural areas. The study setting was chosen because the prevalence of dental caries in rural areas had already been studied extensively. Study population The present study focused on secondary school students aged 12 to 25 years from both daytime private and public schools. Being a student in one of the selected schools in the city of Kigali was the inclusion criterion, while having any mental disorder was an exclusion criterion . According to 2019 Rwanda education statistics, 62,408 students were enrolled in secondary schools in Kigali, and this population served as the basis for calculating the sample size. This age range was chosen to match the WHO classification and to allow comparison in dental caries surveillance . Sample size determination The sample size was calculated with the single-proportion formula , using the 55.7% caries prevalence among adolescents reported in Rwanda's oral health survey . Here n is the sample size, N the total population (62,408 students), p the proportion with dental caries (0.557), q the proportion without dental caries (0.443), e the margin of error (0.04), and $$Z_{\alpha/2}$$ the critical value, which is 1.96 for a 95% confidence level:

$$n = \frac{N\,Z_{\alpha/2}^{2}\,p\,q}{e^{2}(N-1)+Z_{\alpha/2}^{2}\,p\,q} = \frac{62408\times(1.96)^{2}\times 0.557\times 0.443}{(62408-1)\times(0.04)^{2}+(1.96)^{2}\times 0.557\times 0.443} = 586.887 \cong 587 \text{ students}$$

A total of 646 students participated after adding 10% for non-response.
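As a numerical check, the sample-size arithmetic above can be reproduced in a few lines; this is a sketch, and the function name is ours.

```python
import math

def proportion_sample_size(N, p, e=0.04, z=1.96):
    """Sample size for estimating a proportion in a finite population:
    n = N*z^2*p*q / (e^2*(N-1) + z^2*p*q), the formula used in the study."""
    q = 1 - p
    return (N * z**2 * p * q) / (e**2 * (N - 1) + z**2 * p * q)

n = proportion_sample_size(N=62408, p=0.557)
print(round(n, 3))           # 586.887
print(math.ceil(n))          # 587 students
print(math.ceil(n * 1.10))   # 646 after adding 10% for non-response
```

Rounding 586.887 up gives the 587-student minimum, and inflating by 10% for non-response yields the 646 participants actually enrolled.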
The total number of secondary schools in the city of Kigali was 117: Gasabo district had 56 (45.2%), Kicukiro 31 (26.4%), and Nyarugenge 30 (25.6%). Based on proportional sampling, we selected 26 schools from Gasabo, 8 from Kicukiro, and 8 from Nyarugenge. Systematic random sampling with class rosters was used to select study participants: 298 (46%) students were taken from Gasabo, 180 (27.8%) from Kicukiro, and 168 (26%) from Nyarugenge. The sampling interval was every 96th participant on the class roster, and the first participant was chosen at random from among the 1st to 96th students.
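The systematic random sampling described above, a random start between the 1st and 96th student followed by every 96th name on the roster, can be sketched as follows; the roster here is synthetic.

```python
import random

def systematic_sample(roster, interval, seed=None):
    """Pick a random start in [1, interval], then take every interval-th
    student from the (1-indexed) class roster."""
    rng = random.Random(seed)
    start = rng.randint(1, interval)   # first participant chosen at random
    return roster[start - 1::interval]

roster = list(range(1, 62409))         # synthetic roster of 62,408 students
sample = systematic_sample(roster, interval=96, seed=2024)
print(len(sample))                     # about 650 students
print(sample[1] - sample[0])           # 96: constant sampling interval
```

Every selected student is exactly 96 roster positions from the previous one, so the sample spreads evenly across the whole roster regardless of the random start.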
Data collection procedure and research instruments Face-to-face interviews and oral examinations were used to collect data. Oral examinations were performed on portable dental chairs at all schools. Data were collected by one research assistant, a dentist, and one recording assistant, both calibrated during the pilot study. During the training and pilot study, two dentists were trained by the principal investigator on the research tool and the DMFT index, and an acceptable inter-observer agreement (Cronbach's alpha coefficient of 0.80) was found; however, during data collection only one dentist collected all the data. The recording assistant sat close enough to the examiner that instructions and codes could be recorded, and the examiner verified that the data were being recorded correctly. Dental caries was assessed using the WHO clinical criteria for performing an oral health survey . The questionnaire was adapted from a previous study and translated into Kinyarwanda, with back-translation used for quality assurance. Information on social demographics, risk behaviors, and oral hygiene practices was collected, covering variables such as age, gender, socioeconomic status, school location, parents' education level, oral hygiene measures, dental visits, dietary habits, and area of residence. Socioeconomic status was categorized into four categories. Before data collection, the consent form was sent to the parents, and students were tasked with asking about their families' socioeconomic status, to reduce bias. The frequency of sugary food or drink intake was categorized such that consumption three times or more was classed as highly frequent intake of sugary foods or drinks. Dental caries experience, the outcome variable, was recorded through oral examination.
Oral examinations were conducted in class with disposable dental instruments; explorers, dental mirrors, facemasks, gloves, and headlamps were used . Dental caries was assessed as recommended by the WHO for performing an oral health survey , and the decayed (D), missing (M), and filled (F) teeth (DMFT) index was recorded for each study participant following the WHO recommendations for conducting a basic oral survey . A tooth was recorded as carious on the basis of visually detectable carious lesions in a pit or fissure or on a smooth tooth surface; a tooth with a restoration that was also decayed was classified as carious, and visually detectable caries on the root surface was likewise recorded as a decayed tooth. Only teeth missing due to caries were recorded as missing: a tooth destroyed down to the root was considered missing, and a tooth extracted because of caries was considered missing due to caries, while teeth missing for other reasons were excluded . A tooth with a restoration and no caries was considered filled. A dental explorer and dental mirror were used to detect decayed as well as filled teeth. Data analysis Data were entered into an Excel sheet and exported into the Statistical Package for the Social Sciences (SPSS) version 25 for statistical analysis. Descriptive statistics such as frequencies and means were computed to describe the data. The DMFT index was classified into a caries severity index following the WHO: 0.0–1.1 = very low, 1.2–2.6 = low, 2.7–4.4 = moderate, and 4.5–6.5 = high severity . Sociodemographic characteristics such as social category were collapsed into two categories (poor or rich).
Under the Rwandan system of classifying economic status, category 1, very poor (citizens who are homeless and cannot feed themselves without assistance), and category 2, poor (citizens who pay for housing and work for others as manual laborers), were classified as poor, while category 3, middle class (citizens who can pay for their needs and work as professionals or own a business), and category 4, rich (citizens who work as government officials at director level or own large businesses), were classified as rich . Parents' education was grouped into three categories (no formal education, secondary or less, and university level). Qualitative variables such as socioeconomic status and oral health behavior were presented as frequency distributions. Cross-tabulations with independent two-sample t-tests and ANOVA-type tests (Kruskal-Wallis H test) were used for bivariate analysis of the relationships between the independent and dependent variables. Poisson regression analysis was used to assess the factors associated with DMFT; this model is suitable for count data, where the outcome variable represents the number of times an event occurs in a given unit of observation. Significance was set at P < 0.05, with odds ratios and 95% confidence intervals reported.
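Poisson regression of the kind used for the DMFT counts can be illustrated on synthetic data. The sketch below fits a one-predictor model (intercept plus a binary sex indicator) by Newton-Raphson; the data, counts, and function name are invented for illustration and are not the study's.

```python
import math

def fit_poisson(xs, ys, iters=25):
    """Poisson regression with one binary predictor (plus intercept),
    fitted by Newton-Raphson on the log-likelihood. Pure-Python sketch."""
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for x, y in zip(xs, ys):
            mu = math.exp(b0 + b1 * x)   # model mean for this observation
            g0 += y - mu                 # gradient wrt intercept
            g1 += (y - mu) * x           # gradient wrt slope
            h00 += mu                    # (negative) Hessian entries
            h01 += mu * x
            h11 += mu * x * x
        det = h00 * h11 - h01 * h01      # solve the 2x2 Newton system
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical DMFT counts: males (x=0) vs females (x=1)
xs = [0] * 5 + [1] * 5
ys = [2, 3, 2, 3, 2] + [4, 3, 4, 5, 4]
b0, b1 = fit_poisson(xs, ys)
print(round(math.exp(b1), 2))  # 1.67: the female:male rate ratio
```

Exponentiating a coefficient gives the rate ratio (reported as an AOR in the paper's wording); here exp(b1) equals the ratio of the female and male group means, since a single binary predictor saturates the model.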
This study included 646 secondary school students, with a response rate of 91.2%. Table shows the socio-demographic characteristics of the study participants. Females made up 50.5% of the sample and males 49.5%. Most participants were aged 15 to 19 years (442, 68.4%), and half were from rural areas (324, 50.2%). Regarding parental education, 357 (55.3%) of the adolescents' fathers and 366 (56.7%) of their mothers had attained secondary education. Most students came from families of moderate socioeconomic status (260, 40.2%). Table shows dental caries experience. The mean DMFT score was 3.3 (SD = 3.9), with the decayed component present in 61.1% of the total population. The untreated caries component had a mean of 3.17 (SD = 3.7). Notably, 33 (5.1%) students had teeth missing as a result of dental caries and 11 (1.7%) had filled teeth. Table shows the caries severity index, with the Kruskal-Wallis H test used to test for differences in the severity index by age, gender, residence, and social category. The test revealed significant differences in the caries severity index by age ( P = 0.034) and by gender ( P < 0.001). Participants aged 15 to 19 years had a higher decayed-missing-filled severity index (140, 70.0%) than the other age categories. Female participants had more severely carious teeth (132, 66.0%) than their male counterparts. Table shows the distribution of oral health behaviors related to dental caries among the study participants. The proportion of students who reported using fluoridated toothpaste was 330 (51.1%), and only 149 (23.1%) of study participants had visited a dentist in the past 12 months.
Frequent sugary food consumption and regular teeth cleaning were reported by 413 (63.9%) and 361 (55.9%) participants, respectively. Table summarizes the bivariate analysis and the Poisson regression analysis used to investigate factors associated with dental caries experience (DMFT). In the bivariate analysis, gender, age, parents' education levels, and visits to a dentist predicted dental caries experience (DMFT). In the Poisson regression analysis, females had a 1.5 times higher likelihood of dental caries than their male counterparts (AOR = 1.5, CI: 1.4–1.6, P < 0.001). Younger participants (12–14 years) were 40% less likely to have dental caries (AOR = 0.6, CI: 0.5–0.7, P < 0.001), showing that DMFT increased with age; however, in the multiple comparison test there were no significant differences between age groups. Residence and social category were also significant factors for dental caries. Participants from low-income families had a 20% lower likelihood of dental caries than participants from rich families (AOR = 0.8, CI: 0.7–0.9, P < 0.001). Participants who did not consume sugary foods or drinks frequently had a 10% lower likelihood of dental caries (AOR = 0.9, CI: 0.8–0.9, P < 0.001). However, DMFT was 1.2 times higher among participants who did not use fluoridated toothpaste (AOR = 1.2, CI: 1.1–1.3, P < 0.001) and 1.1 times higher among those from rural areas (AOR = 1.1, P < 0.001) than among their counterparts.
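For a single binary covariate, a log-link Poisson model's rate ratio reduces to the ratio of group mean counts. The sketch below illustrates this with small hypothetical DMFT counts (not the study's data), chosen so the ratio comes out at 1.5, mirroring the reported female-to-male estimate:

```python
from math import exp, log

# Hypothetical DMFT counts for illustration only (not the study's data)
male_dmft = [1, 2, 1, 4]      # group mean 2.0
female_dmft = [3, 2, 4, 3]    # group mean 3.0

rate_male = sum(male_dmft) / len(male_dmft)
rate_female = sum(female_dmft) / len(female_dmft)

# Log-link Poisson model: log(rate) = b0 + b1 * female.
# With a single binary covariate, the fitted rates equal the group means,
# so exp(b1) is simply the ratio of the group means.
b0 = log(rate_male)
b1 = log(rate_female) - log(rate_male)
rate_ratio = exp(b1)          # 1.5: females' caries rate is 1.5x males'
```

A full adjusted analysis, as in the paper, would add the other covariates (age, residence, toothpaste use, and so on) to the linear predictor.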
This study aimed to determine the prevalence of dental caries and its associated factors among secondary school students in the city of Kigali, Rwanda. The prevalence of dental caries was 61.1% and the mean DMFT was 3.3 (SD = 3.9). The findings also showed a significant difference in DMFT between female and male participants: females had a greater mean DMFT, 4.1 (SD = 4.2), than male students, 2.6 (SD = 4.5). An age-related rise in DMFT was also found, presumably reflecting the longer exposure of permanent teeth to sugary foods among older students compared with younger students. In addition, DMFT was 1.2 times higher among participants who did not frequently use fluoridated toothpaste. The present findings are similar to those of another study in Rwanda, which found a prevalence of 64%, and of a study in Vietnam (68.9%). However, they are higher than the prevalence found in Indore district, India (47.2%), and lower than those reported in Kazakhstan (74%), Russia (77.5%), Tanzania (91.5%), and India (89.3%). These differences may be attributable to differences in socioeconomic status, study setting, and oral health behavior among study participants. The current results also showed a higher DMFT than other similar studies in China (2.38), Tanzania (0.59), and Sudan (3.06), and a higher DMFT than the WHO threshold, which classifies a mean DMFT greater than 3 as a high caries severity index. These differences might be attributed to factors such as socioeconomic conditions, availability of healthcare infrastructure, and cultural attitudes towards oral health. Countries with well-established public health systems and preventive care programs tend to report lower DMFT scores.
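As a concrete illustration of the index discussed above: DMFT is the per-person sum of decayed, missing (due to caries), and filled permanent teeth, and the WHO-based cut-off used in the text classifies a mean DMFT above 3 as high severity. A minimal sketch with hypothetical values:

```python
def dmft(decayed, missing, filled):
    """DMFT index for one person: decayed + missing + filled permanent teeth."""
    return decayed + missing + filled

def severity(mean_dmft):
    """Classify a mean DMFT against the > 3 'high' cut-off used in the text."""
    return "high" if mean_dmft > 3 else "not high"

# Three hypothetical students
scores = [dmft(3, 0, 0), dmft(2, 1, 0), dmft(5, 1, 1)]
mean_dmft = sum(scores) / len(scores)   # (3 + 3 + 7) / 3, about 4.33
band = severity(mean_dmft)              # "high"
```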
In addition, the current study found that participants from low socioeconomic status families and participants who did not frequently consume sugary foods or drinks had a reduced proportion of dental caries. One possible reason is that participants from rural areas snack less frequently than their urban counterparts, owing to the low availability and accessibility of sugary foods and drinks in rural areas. Dental caries experience has decreased significantly in developed countries, such as the United States and several European countries, owing to widespread access to preventive measures such as fluoridated water, regular dental check-ups, and health education programs; the application of these measures has contributed substantially to the overall improvement in oral health outcomes in these populations. In contrast, the current results showed few filled teeth and low proportions of participants with fissure sealants placed on high-risk teeth. A Korean study estimating the prevalence, severity, and distribution of dental caries among secondary school children reported a decrease in the mean DMFT from 3.3 in 2000 to 2.2 in 2012, attributed to an improved healthcare system and the provision of fissure sealants on posterior teeth. The current study also found that 76.9% of participants had not visited a dentist in the past 12 months, a pattern common in many low- and middle-income nations. The prevalence of dental caries is higher in these settings because of limited access to dental care services, insufficient infrastructure, a lack of preventive measures combined with excessive sugar intake, and poor oral hygiene practices. Poverty and a lack of knowledge, for example, play a substantial role in impeding effective oral health promotion in these countries.
One limitation of the study was the method used to detect dental caries, which relied entirely on visual and tactile screening rather than radiographic images. This most likely resulted in an underestimate of the true prevalence of dental caries. Furthermore, the study used a cross-sectional design, which, while appropriate for analyzing associations between variables, cannot establish when caries developed or the temporal relationship between caries and its presumed risk factors. A cohort study within the examined age range is therefore required to gain a more thorough understanding of the relationship between dental caries and its underlying causes, allowing a longitudinal investigation of the dynamic nature of caries development. A third limitation is that recall bias may have occurred during data collection on variables such as social category; however, this was minimized by working with parents in reporting their social status.

The prevalence of dental caries among secondary school students aged 12 to 25 years in Kigali, Rwanda was high. Factors associated with dental caries were gender, age, socioeconomic status, dental service utilization, frequent consumption of sugary foods or drinks, use of fluoridated toothpaste, and residence. These findings highlight the need to resolve inequities in access to oral healthcare and to establish community-based initiatives that improve oral health equity among secondary students in Kigali, and they show that early treatment and preventive strategies for dental caries are needed in Kigali secondary schools. They may also be used for oral health surveillance and monitoring of dental caries among secondary students in Kigali, Rwanda.
Detection of dental caries under fixed dental prostheses by analyzing digital panoramic radiographs with artificial intelligence algorithms based on deep learning methods | 46ba839a-226a-47c3-a22a-f114f7125f9d | 11809006 | Dentistry[mh] | Fixed dental prosthesis (FDP) is a good treatment option for patients with partial edentulism . Despite the advances in the techniques and materials, the treatment may fail, resulting with replacement of crowns and bridges . The most common cause of failure in FDPs is dental caries . The difficulties inherent in diagnosing caries under FDPs with traditional methods underscore the necessity for the development of alternative caries detection methods. The impracticality and potential distress associated with the removal of a permanently cemented prosthesis for caries detection under FDPs highlight the need for alternative approaches . In such cases, intraoral and extraoral radiographic methods are used . Panoramic radiography is an extraoral imaging technique that is frequently employed in the context of dental examinations. Panoramic radiography offers a comprehensive visualization of the anatomical structures of the face in a relatively short time. Furthermore, the low radiation dose represents an additional advantage when the imaged area and other imaging modalities are considered . In this sense, it is important to conduct studies that will accelerate caries detection using panoramic radiography and provide practical benefit to physicians . Panoramic radiography provides valuable information to physicians in the process of deciding on the treatment to be performed. However, it should be noted that panoramic radiography may not be a comprehensive solution for the detection of caries . This is due to the resolution and detail issues inherent to panoramic radiography, as well as the formation of metal artefacts, caries presence in the subgingival region, and the radiopacity of FDPs, which impedes the visualization of caries . 
These limitations of caries detection under fixed dental prostheses indicate the need for more sensitive and specific diagnostic methods. The demand for computer-aided diagnostic systems that can provide a second opinion to physicians is increasing, and new computer-aided dental caries detection methods have been investigated. In recent years, artificial intelligence (AI) and deep learning methods, especially convolutional neural networks (CNN), have produced successful caries detection results on periapical, bitewing, and panoramic radiographic images, with promising clinical applications. You Only Look Once (YOLO) is a CNN-based algorithm inspired by the human visual system's ability to simultaneously detect where objects in an image are and how they interact with each other. YOLOv7 is one of the YOLO algorithms developed over the years; its accuracy and speed have increased compared with previous YOLO versions. It is arguable that AI systems applied to radiologic images in clinical practice may be better than, or equivalent to, clinicians' ability to analyze such images. Increasing the effectiveness of dental caries detection with deep learning algorithms in dental radiography is therefore expected to greatly benefit dentists in clinical practice. The aim of this study was to evaluate the effectiveness of detecting dental caries under FDPs by analyzing panoramic radiographs with YOLO-based CNN models. The null hypothesis was that AI algorithms based on deep learning methods would not be effective in detecting caries under FDPs on panoramic radiographs.

Dataset
The ethical approval (Decision No: 2022.09.22) of the Non-Interventional Research Ethics Committee of the University was obtained for the study. The dataset was obtained from the university's database.
The panoramic radiographs of 1004 patients aged 25–75 years who attended the faculty for examination and/or treatment between 2016 and 2023 and had fixed dental prostheses were recorded in JPEG format. The inclusion criterion was that at least one of extraction, filling, root canal treatment, or prosthesis renewal had been performed on a tooth under an FDP because of the progression of caries; this was validated from the university's patient registration system. The exclusion criteria were the absence of FDPs in the radiographic images and the presence of artifacts or superpositions that would prevent optimal evaluation of caries, abutment teeth, and missing-tooth areas under the FDPs on the panoramic radiographs. A panoramic x-ray device operating at 66 kVp and 10 mA with 16 s irradiation (Gendex GXDP-700, Gendex) was used. Data labelling was performed by two prosthodontists (BA, with 4 years of experience, and SA, with 27 years of experience) reaching a simultaneous decision and consensus on the same computer screen; labelling was not performed when the diagnoses of the two dentists did not match. The dataset size was determined by a power analysis conducted using G*Power 3.1 (Heinrich Heine University), which indicated that a total sample size of 1003 was required (sample size group 1 = 903 and sample size group 2 = 100), with 95% power (1-β error probability), a proportion of P1/H1 = 0.71, and a significance level of α = 0.05.

Study design
This study had two stages. In the first stage, the dataset of 1004 digital panoramic radiographs was divided into a training dataset (n = 904 [90%]) and a test dataset (n = 100 [10%]). In the training and test datasets, teeth restored with FDPs were labeled to include their roots.
On the test dataset, the CNN-based deep learning model YOLOv7 achieved a high detection score, and FDPs on all radiographs were automatically detected and cropped (selectively extracted from the original radiographic images) by the same model. In the second stage, the new dataset of cropped images was divided into a training dataset (n = 2248 [91%]) and a test dataset (n = 219 [9%]). Caries under the FDPs were detected with the YOLOv7 model and an improved YOLOv7 model that uses the convolutional block attention module (CBAM). The performance of the deep learning models was evaluated with recall, precision, mAP, and F1 scores. The study plan and number of labels are shown in Fig. .

Data labelling process
The 1004 panoramic radiographs in the dataset were resized to 640 × 640 pixels and converted into JPEG file format. In the labelling process, FDPs, abutments, and missing-tooth areas (regions of interest [ROI]) on the panoramic images were all labeled: 2248 labels were made on the training dataset of 904 images and 243 labels on the test dataset of 100 images. After the trained YOLOv7 model detected the FDPs, ROIs were automatically cropped from the main radiographic images (Fig. B, E, H). Dental caries, caries-free abutment teeth, and missing teeth under the FDPs were then labeled by BA and SA using the cropped images (Fig. C, F, I); all abutment teeth found to be caries-free were labeled as "healthy". The cropped images were resized to the dimensions accepted by YOLOv7. After resizing, it was not possible to label the abutments in some images as healthy or carious, so 24 images were excluded from the test dataset, since the test dataset affects the final outcome of the study. In the 2248-image training dataset, 950 labels for caries, 1978 labels for missing teeth, and 4158 labels for healthy abutments were obtained.
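The detect-then-crop step described above, extracting each detected FDP region from the full radiograph before second-stage labelling, can be sketched as follows (a minimal pure-Python illustration with box coordinates assumed as pixel corners, not the authors' implementation):

```python
def crop_detections(image, boxes):
    """Crop each detected region from an image stored as a list of pixel rows.

    boxes: iterable of (x1, y1, x2, y2) pixel corners from the detector.
    """
    crops = []
    for x1, y1, x2, y2 in boxes:
        crops.append([row[x1:x2] for row in image[y1:y2]])
    return crops

# Stand-in 640 x 640 panoramic radiograph (all-zero pixels)
radiograph = [[0] * 640 for _ in range(640)]

# One hypothetical FDP bounding box; each crop keeps the prosthesis and
# abutment roots for the second-stage caries labelling
rois = crop_detections(radiograph, [(100, 200, 260, 360)])
```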
In the 219-image test dataset, 110 labels for caries, 258 labels for missing teeth, and 502 labels for healthy abutments were obtained (Table ). Predictions (Fig. D, G, J) of the trained deep learning models were obtained with the test dataset. Representative examples of the cropped radiographs, unlabeled and labeled images, and predictions of the YOLOv7 model are presented in Fig. .

Caries detection on panoramic radiographs with YOLOv7 and YOLOv7 + CBAM
In this study, the YOLOv7 algorithm, which comprises input, backbone, neck, and head components, was employed to identify dental caries under FDPs. The default input size of the YOLOv7 model is 640 × 640 pixels, which minimizes the computational cost of the model; consequently, the input module resized the input images to 640 × 640 pixels, the size accepted by the YOLOv7 architecture, before feeding them to the backbone network. The Convolutional Block Attention Module (CBAM), an attention mechanism shown to improve caries detection performance under FDPs, was integrated into the YOLOv7 model. Although CBAM modules can increase performance, they can instead degrade it if their number is not chosen correctly. In this study, CBAM modules were integrated into various regions of the architecture, and using three CBAM modules after the ELAN modules achieved the best detection performance (Fig. ).

Development environment and model hyperparameters
The deep learning models were developed using the Python programming language, with the PyTorch and OpenCV libraries used to set up and test the models. The whole process was carried out on a computer with an Nvidia 1080Ti graphics card, two Xeon processors, 32 GB of RAM, and the Ubuntu 20.04 operating system.
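The CBAM block mentioned above combines channel and spatial attention over a feature map. A generic PyTorch sketch is shown below, following the standard CBAM design rather than the authors' exact integration into YOLOv7's ELAN stages:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Reweight channels using pooled descriptors passed through a shared MLP."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average pooling
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max pooling
        scale = torch.sigmoid(avg + mx).view(b, c, 1, 1)
        return x * scale

class SpatialAttention(nn.Module):
    """Reweight spatial locations using channel-pooled maps and a conv layer."""
    def __init__(self, kernel_size=7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg = x.mean(dim=1, keepdim=True)
        mx = x.amax(dim=1, keepdim=True)
        scale = torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))
        return x * scale

class CBAM(nn.Module):
    """Channel attention followed by spatial attention; output shape = input shape."""
    def __init__(self, channels):
        super().__init__()
        self.ca = ChannelAttention(channels)
        self.sa = SpatialAttention()

    def forward(self, x):
        return self.sa(self.ca(x))

x = torch.randn(1, 32, 40, 40)   # e.g. a feature map after an ELAN block
out = CBAM(32)(x)                # attention-refined features, same shape
```

Because the module preserves the feature-map shape, it can be dropped in after a backbone stage without altering the surrounding architecture.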
The training process was configured with the following hyperparameters: the model was trained for 200 epochs with a batch size of 16; the Intersection over Union (IoU) threshold was set to 0.50 and the confidence threshold to 0.20; the Adam optimizer was employed with a learning rate of 0.01, momentum of 0.93, and weight decay of 0.0005; and the input size was defined as 640 × 640 × 3.

Performance analysis
Performance metrics were used to compare the success of the developed deep learning models. In deep learning, the similarity between the values labeled on the test data (ground truth) and the values predicted by the model is measured by the IoU value. The object detection success of the model is evaluated by comparing the IoU value with a threshold; in this study, the threshold was set to 0.5. The predictions of a model trained on real values in binary or multi-class classification tasks are shown in the confusion matrix, a table that summarizes the actual and predicted conditions. The confusion matrix terminology used in this study was as follows:

True Positive (TP): in an area where the object is labeled, the model indicates the object exists.
False Positive (FP): in an area where the object is not labeled, the model indicates the object exists.
False Negative (FN): in an area where the object is labeled, the model indicates the object does not exist.
True Negative (TN): in an area where the object is not labeled, the model indicates the object does not exist.

The TN value is generally not used in object detection. After the TP, FP, and FN values were obtained, model performance was evaluated by calculating recall, precision, mAP, and F1 scores from these values. Precision (P) is the ratio showing how many of the positive predictions were correct.
Recall (R) is the percentage of positive samples that are correctly predicted, and the F1 score is the harmonic mean of the P and R values reduced to a single number; it was used instead of accuracy. The mean average precision (mAP) is a metric used to evaluate the performance of a model for tasks such as information extraction and object detection. The equations are given below (Eqs. 1–4).

$$\text{Precision}=\frac{\text{True Positive}}{\text{True Positive}+\text{False Positive}} \quad (1)$$

$$\text{Recall}=\frac{\text{True Positive}}{\text{True Positive}+\text{False Negative}} \quad (2)$$

$$\text{F1-Score}=\frac{2(\text{Precision}\times\text{Recall})}{\text{Precision}+\text{Recall}} \quad (3)$$

$$\text{mAP}=\frac{1}{N}\sum_{i=1}^{N} AP_i \quad (4)$$

Evaluation of the YOLOv7 model for detecting FDPs on panoramic radiography
Of the 1004 panoramic radiographs, 904 were identified as the training dataset and 100 as the test dataset. The trained YOLOv7 model evaluated 230 of the 243 FDP labels in the 100 test images as TP, 8 as FP, and 13 as FN (Fig. ). As a result of this evaluation, recall of 0.947, precision of 0.966, mAP of 0.968, and an F1 score of 0.956 were obtained (Table ).
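The reported scores follow directly from the IoU-thresholded TP/FP/FN counts. The sketch below (illustrative, not the authors' code) shows the IoU test and reproduces the FDP-detection metrics from the counts above:

```python
def iou(a, b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    union = area(a) + area(b) - inter
    return inter / union

# A prediction counts as TP when its IoU with a ground-truth box
# reaches the 0.5 threshold used in the study
is_tp = iou((0, 0, 10, 10), (2, 0, 12, 10)) >= 0.5   # IoU = 80/120, about 0.667

def precision(tp, fp):
    return tp / (tp + fp)

def recall(tp, fn):
    return tp / (tp + fn)

def f1(p, r):
    return 2 * p * r / (p + r)

# FDP detection on the 100-image test set: TP = 230, FP = 8, FN = 13
p = precision(230, 8)   # about 0.966
r = recall(230, 13)     # about 0.947
score = f1(p, r)        # about 0.956
```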
Detection scores of caries under FDPs with YOLOv7 and YOLOv7 + CBAM After the trained YOLOv7 model detected FDPs, ROIs were automatically cropped from the main radiographic images and 2491 images were obtained. Among the cropped images, 2248 images were used as the training dataset and 219 images were used as the test dataset. The trained YOLOv7 model predicted 87 of the 110 caries labels in the test group as TP, 17 as FP and 23 as FN. Of the 258 missing tooth labels, 242 were TP, 59 were FP, and 16 were FN. Of the 502 caries-free abutment labels, 490 were predicted as TP, 29 as FP and 12 as FN (Fig. A). The YOLOv7 model had 0.791 recall, 0.837 precision, 0.800 mAP, and 0.813 F1 scores for dental caries labelling on the cropped images; 0.939 recall, 0.804 precision, 0.931 mAP, and 0.866 F1 scores for the missing teeth. For the caries-free abutments, 0.976 recall, 0.945 precision, 0.978 mAP, and 0.960 F1 scores were obtained (Table ). After training with the original YOLOv7 model, attention mechanisms (CBAM) were integrated into this model to better detection, and it was observed that the caries detection performance of the model increased. The trained YOLOv7 + CBAM model identified 91 of the 110 caries labels in the test group as TP, 18 as FP and 19 as FN. Of the 258 missing tooth labels, 238 were TP, 52 were FP, and 20 were FN. Of the 502 caries-free abutment labels, 484 were predicted as TP, 28 as FP, and 18 as FN (Fig. B). As a result of training with the YOLOv7 + CBAM model, 0.827 recall, 0.834 precision, 0.846 mAP, and 0.830 F1 scores for the caries labels, 0.922 recall, 0.821 precision, 0.933 mAP, and 0.868 F1 scores for the missing tooth labels, and 0.964 recall, 0.945 precision, 0.973 mAP, and 0.954 F1 scores for the caries-free abutment labels were obtained (Table ). Of the 1004 panoramic radiographs, 904 were identified as the training dataset and 100 were identified as the test dataset. 
In this study, promising results were obtained using deep learning models to detect dental caries under FDPs by analyzing digital panoramic radiographs. While the original YOLOv7 model achieved 0.791 recall, 0.837 precision, 0.800 mAP, and 0.813 F1 scores for the caries labels, the YOLOv7 + CBAM model achieved 0.827 recall, 0.834 precision, 0.846 mAP, and 0.830 F1 scores for the same labels. Therefore, the null hypothesis, which stated that the detection of dental caries under FDPs by analyzing digital panoramic radiographs with artificial intelligence algorithms based on deep learning methods would not be efficient, was rejected. Bitewing radiography is the most commonly used imaging technique for the detection of approximal caries . In the study conducted by Bayraktar et al. , in which approximal caries lesions were detected using the YOLOv3 algorithm on bitewing radiographs, recall and precision scores of 0.7226 and 0.9819, respectively, were obtained. In another study by Panyarak et al. using the YOLOv7 algorithm on bitewing radiographs, 0.605 precision and 0.512 recall scores for caries detection were obtained. The present study shows that, within the specified inclusion and exclusion criteria, detection scores comparable to those achieved with bitewing imaging can be obtained using panoramic radiographs. Several studies have focused on dental caries detection using deep learning on panoramic radiographs . In one of these studies, which applied an AI model to panoramic images, recall scores were 0.9674 for crown detection and 0.3026 for caries detection, and precision scores were 0.8600 for crowns and 0.5096 for caries .
Although the crown detection recall (0.947) and precision (0.966) scores of the present study were consistent with that study, its recall and precision scores for caries detection were lower than those of the present study, which were 0.791 (without the CBAM module) and 0.827 (with the CBAM module) for recall, and 0.837 (without the CBAM module) and 0.834 (with the CBAM module) for precision. The higher scores in the current study can be explained by the different models, the use of the CBAM module, and the larger number of ground truths. In another study, FDPs were detected on panoramic radiographs with the YOLOv4 model, using 521 panoramic radiographs selected from 5126 as test data. In that study, 0.79 recall, 0.74 precision, and 0.76 F1 scores were obtained for crown labels, and 0.95 recall, 0.84 precision, and 0.89 F1 scores for bridge labels . The authors are not aware of any study in the literature evaluating the detection of caries under FDPs; therefore, the results of the present study could not be compared with those of other studies. The application of image cropping in this study had two aims: first, to augment the quantity of data, and second, to enhance the model's detection of caries, healthy abutments, and missing teeth under FDPs. In a similar study by Chen et al. , 1525 periapical radiographs were converted into single-tooth images using image cropping, and detection scores were obtained using different CNN algorithms for the simultaneous detection of periodontitis and caries on periapical radiographs. In the present study, attention mechanisms (CBAM) were integrated into YOLOv7 to better predict the study parameters and dental caries under FDPs . Another study, which used the CBAM module with the original YOLOv7 algorithm, concluded that the recall and mAP scores of the algorithm increased in comparison to the original YOLOv7 algorithm .
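To make the role of the attention mechanism concrete, the sketch below is a deliberately simplified, pure-Python illustration of CBAM-style channel attention: each channel is summarized by average- and max-pooling, squashed through a sigmoid into a gate in (0, 1), and used to rescale that channel. The real CBAM module additionally passes the pooled descriptors through a shared two-layer MLP and adds a 7 × 7 convolutional spatial-attention branch, both omitted here for brevity:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def channel_attention(feature_map):
    """Toy CBAM-style channel attention on a C x H x W nested list:
    gate each channel by sigmoid(avg_pool + max_pool). The shared
    MLP of the real module is replaced by an identity for brevity."""
    attended = []
    for channel in feature_map:
        values = [v for row in channel for v in row]
        gate = sigmoid(sum(values) / len(values) + max(values))
        attended.append([[v * gate for v in row] for row in channel])
    return attended

# A strongly activated channel is largely kept (gate = sigmoid(2));
# a silent channel is gated by sigmoid(0) = 0.5 and stays at zero.
fmap = [[[1.0, 1.0], [1.0, 1.0]],
        [[0.0, 0.0], [0.0, 0.0]]]
out = channel_attention(fmap)
```

In a detector, such gates let the network emphasize feature channels that respond to subtle radiolucencies near abutments, which is consistent with the improved caries recall observed with YOLOv7 + CBAM.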
These results are consistent with the present study, in which the recall, mAP, and F1 scores for caries detection increased with the YOLOv7 + CBAM model (Table ). Although the recall values for caries detection under FDPs were 0.791 with the YOLOv7 model and 0.827 with the YOLOv7 + CBAM model, they still need improvement for clinical practice. Bitewing radiographs are widely preferred for detecting dental caries due to their superior resolution and sensitivity. However, their exclusion from this study represents a significant limitation. The absence of bitewing radiographs means that certain advantages of this modality, such as reduced artifact interference and higher detail resolution, were not evaluated in comparison to panoramic radiographs. Future studies could address this by adopting a study design that incorporates both panoramic and bitewing radiographs, enabling a more comprehensive comparison of diagnostic effectiveness and elucidating the scenarios where each modality performs best. On the other hand, panoramic radiographs are commonly employed in clinical practice due to their ability to provide simultaneous visualization of all teeth and associated structures. For this reason, they were chosen in this study to evaluate their utility in detecting caries beneath FDPs. However, relying solely on panoramic radiographs may amplify the limitations of this method, such as challenges in detecting subgingival caries or lesions obscured by metal artifacts from FDPs. These inherent drawbacks could have influenced the results and need to be addressed in future research. To enhance the reliability and applicability of deep learning models in clinical settings, future research should consider integrating multimodal datasets combining panoramic and bitewing radiographs. Training deep learning algorithms with data from both modalities may leverage their complementary strengths, improving diagnostic accuracy and robustness.
Additionally, including radiographic images from a variety of dental imaging devices could enhance the generalizability of the algorithms to different clinical environments. Furthermore, incorporating patient-specific factors such as age, dental condition, or type of restoration into model training could refine diagnostic performance and yield more precise results. In conclusion, while this study demonstrated promising results in detecting caries under FDPs using non-bitewing radiographic techniques, the integration of diverse imaging modalities and datasets is necessary to build a more robust and reliable foundation for future clinical applications. Within the limitations of this study, it was demonstrated that AI algorithms based on deep learning models, particularly YOLOv7 + CBAM, achieved promising results in detecting dental caries under fixed dental prostheses (FDPs) using panoramic radiographs. The improved recall (0.827) and precision (0.834) scores highlight the potential of attention mechanisms in enhancing diagnostic accuracy. However, the exclusive use of panoramic radiographs limits the findings, as bitewing radiographs or multimodal imaging could provide greater sensitivity and detail. Future research should incorporate diverse datasets, multimodal approaches, and clinical validation to enhance the generalizability and practical application of these AI models in dental diagnostics. |